Parks, Colleen M
2013-07-01
Research examining the importance of surface-level information to familiarity in recognition memory tasks is mixed: sometimes it affects recognition and sometimes it does not. One potential explanation of the inconsistent findings comes from combining the dual-process theory of recognition with the transfer-appropriate processing framework, which together suggest that the extent to which perceptual fluency matters on a recognition test depends in large part on the task demands. A test that recruits perceptual processing for discrimination should show greater perceptual effects and smaller conceptual effects than standard recognition, similar to the pattern of effects found in perceptual implicit memory tasks. This idea was tested in the current experiment by crossing a levels-of-processing manipulation with a modality manipulation on a series of recognition tests that ranged from conceptual (standard recognition) to very perceptually demanding (a speeded recognition test with degraded stimuli). Results showed that the levels-of-processing effect decreased and the effect of modality increased when tests were made perceptually demanding. These results support the idea that surface-level features influence performance on recognition tests when they are made salient by the task demands.
How Fast is Famous Face Recognition?
Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.
2012-01-01
The rapid recognition of familiar faces is crucial for social interactions. However, the actual speed with which recognition can be achieved remains largely unknown, as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to "fast" visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks: a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503
Multitasking During Degraded Speech Recognition in School-Age Children
Grieco-Calub, Tina M.; Ward, Kristina M.; Brehm, Laurel
2017-01-01
Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children’s multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children’s accuracy and reaction time on the visual monitoring task were quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task, with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children’s dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children’s proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition. PMID:28105890
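In dual-task paradigms like this one, the dual-task cost is typically quantified as the decline in secondary-task performance relative to its single-task baseline. A minimal sketch of that computation; the function name and the accuracy values are illustrative assumptions, not data from the study:

```python
def dual_task_cost(baseline, dual):
    """Proportional dual-task cost: positive values mean secondary-task
    performance dropped when paired with the primary (speech) task."""
    return (baseline - dual) / baseline

# Hypothetical visual-monitoring accuracies per vocoding condition
baseline_acc = 0.92  # single-task (baseline) accuracy
dual_acc = {"unprocessed": 0.91, "8ch": 0.89, "6ch": 0.82, "4ch": 0.78}

costs = {cond: dual_task_cost(baseline_acc, acc)
         for cond, acc in dual_acc.items()}
```

Under this measure, a larger cost in the 4-channel than the unprocessed condition would reflect the pattern the study reports: greater spectral degradation, greater reallocation of resources away from the secondary task.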
Family environment influences emotion recognition following paediatric traumatic brain injury
Schmidt, Adam T.; Orsten, Kimberley D.; Hanten, Gerri R.; Li, Xiaoqi; Levin, Harvey S.
2011-01-01
Objective: This study investigated the relationship between family functioning and performance on two tasks of emotion recognition (emotional prosody and face emotion recognition) and a cognitive control procedure (the Flanker task) following paediatric traumatic brain injury (TBI) or orthopaedic injury (OI). Methods: A total of 142 children (75 TBI, 67 OI) were assessed on three occasions (baseline, 3 months, and 1 year post-injury) on the two emotion recognition tasks and the Flanker task. Caregivers also completed the Life Stressors and Resources Scale (LISRES) on each occasion. Growth curve analysis was used to analyse the data. Results: Family functioning influenced performance on the emotional prosody and Flanker tasks but not on the face emotion recognition task. Findings on both the emotional prosody and Flanker tasks were generally similar across groups. However, financial resources emerged as significantly related to emotional prosody performance in the TBI group only (p = 0.0123). Conclusions: Findings suggest family functioning variables, especially financial resources, can influence performance on an emotional processing task following TBI in children. PMID:21058900
Famous face recognition, face matching, and extraversion.
Lander, Karen; Poyarekar, Siddhi
2015-01-01
It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.
Fine-grained recognition of plants from images.
Šulc, Milan; Matas, Jiří
2017-01-01
Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability, and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition "in the wild". We propose texture analysis and deep learning methods for different plant recognition tasks, evaluate them, and compare them to the state-of-the-art. Texture analysis is only applied to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are only applied when sufficiently large datasets are available. The results provide insight into the complexity of different plant recognition tasks. The proposed methods outperform the state-of-the-art in leaf and bark classification and achieve very competitive results in plant recognition "in the wild". The results suggest that recognition of segmented leaves is practically a solved problem when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs make them suitable for plant recognition "in the wild", where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.
Cultural differences in self-recognition: the early development of autonomous and related selves?
Ross, Josephine; Yilmaz, Mandy; Dale, Rachel; Cassidy, Rose; Yildirim, Iraz; Zeedyk, M. Suzanne
2017-05-01
Fifteen- to 18-month-old infants from three nationalities were observed interacting with their mothers and during two self-recognition tasks. Scottish interactions were characterized by distal contact, Zambian interactions by proximal contact, and Turkish interactions by a mixture of contact strategies. These culturally distinct experiences may scaffold different perspectives on self. In support, Scottish infants performed best in a task requiring recognition of the self in an individualistic context (mirror self-recognition), whereas Zambian infants performed best in a task requiring recognition of the self in a less individualistic context (body-as-obstacle task). Turkish infants performed similarly to Zambian infants on the body-as-obstacle task, but outperformed Zambians on the mirror self-recognition task. Verbal contact (a distal strategy) was positively related to mirror self-recognition and negatively related to passing the body-as-obstacle task. Directive action and speech (proximal strategies) were negatively related to mirror self-recognition. Self-awareness performance was best predicted by cultural context; autonomous settings predicted success in mirror self-recognition, and related settings predicted success in the body-as-obstacle task. These novel data substantiate the idea that cultural factors may play a role in the early expression of self-awareness. More broadly, the results highlight the importance of moving beyond the mark test, and designing culturally sensitive tests of self-awareness.
Measuring listening effort: driving simulator vs. simple dual-task paradigm
Wu, Yu-Hsiang; Aksan, Nazan; Rizzo, Matthew; Stangl, Elizabeth; Zhang, Xuyang; Bentler, Ruth
2014-01-01
Objectives: The dual-task paradigm has been widely used to measure listening effort. The primary objectives of the study were to (1) investigate the effect of hearing aid amplification and a hearing aid directional technology on listening effort measured by a complicated, more real-world dual-task paradigm, and (2) compare the results obtained with this paradigm to a simpler laboratory-style dual-task paradigm. Design: The listening effort of adults with hearing impairment was measured using two dual-task paradigms, wherein participants performed a speech recognition task simultaneously with either a driving task in a simulator or a visual reaction-time task in a sound-treated booth. The speech materials and road noises for the speech recognition task were recorded in a van traveling on the highway in three hearing aid conditions: unaided, aided with omnidirectional processing (OMNI), and aided with directional processing (DIR). The change in the driving task or the visual reaction-time task performance across the conditions quantified the change in listening effort. Results: Compared to the driving-only condition, driving performance declined significantly with the addition of the speech recognition task. Although the speech recognition score was higher in the OMNI and DIR conditions than in the unaided condition, driving performance was similar across these three conditions, suggesting that listening effort was not affected by amplification and directional processing. Results from the simple dual-task paradigm showed a similar trend: hearing aid technologies improved speech recognition performance, but did not affect performance in the visual reaction-time task (i.e., reduce listening effort). The correlation between listening effort measured using the driving paradigm and the visual reaction-time task paradigm was significant.
The finding that our older (56 to 85 years old) participants’ better speech recognition performance did not result in reduced listening effort was inconsistent with literature evaluating younger (approximately 20 years old) adults with normal hearing. A follow-up study was therefore conducted, in which the visual reaction-time dual-task experiment using the same speech materials and road noises was repeated with younger adults with normal hearing. Contrary to the findings with older participants, the results indicated that the directional technology significantly improved performance in both the speech recognition and visual reaction-time tasks. Conclusions: Adding a speech listening task to driving undermined driving performance. Hearing aid technologies significantly improved speech recognition while driving, but did not significantly reduce listening effort. Listening effort measured by dual-task experiments using a simulated real-world driving task and a conventional laboratory-style task was generally consistent. For a given listening environment, the benefit of hearing aid technologies on listening effort measured from younger adults with normal hearing may not fully translate to older listeners with hearing impairment. PMID:25083599
Emotion recognition in Parkinson's disease: Static and dynamic factors.
Wasser, Cory I; Evans, Felicity; Kempnich, Clare; Glikmann-Johnston, Yifat; Andrews, Sophie C; Thyagarajan, Dominic; Stout, Julie C
2018-02-01
The authors tested the hypothesis that Parkinson's disease (PD) participants would perform better in an emotion recognition task with dynamic (video) stimuli than in a task using only static (photograph) stimuli, and compared performance on both tasks to healthy control participants. In a within-subjects study, 21 PD participants and 20 age-matched healthy controls performed both static and dynamic emotion recognition tasks. The authors used a 2-way analysis of variance (controlling for individual participant variance) to determine the effect of group (PD, control) on emotion recognition performance in static and dynamic facial recognition tasks. Groups did not differ significantly in their performance on the static and dynamic tasks; however, the trend suggested that PD participants performed worse than controls. PD participants may have subtle emotion recognition deficits that are not ameliorated by the addition of contextual cues similar to those found in everyday scenarios. Consistent with previous literature, the results suggest that PD participants may have underlying emotion recognition deficits, which may impact their social functioning.
Buratto, Luciano G.; Pottage, Claire L.; Brown, Charity; Morrison, Catriona M.; Schaefer, Alexandre
2014-01-01
Memory performance is usually impaired when participants have to encode information while performing a concurrent task. Recent studies using recall tasks have found that emotional items are more resistant to such cognitive depletion effects than non-emotional items. However, when recognition tasks are used, the same effect is more elusive as recent recognition studies have obtained contradictory results. In two experiments, we provide evidence that negative emotional content can reliably reduce the effects of cognitive depletion on recognition memory only if stimuli with high levels of emotional intensity are used. In particular, we found that recognition performance for realistic pictures was impaired by a secondary 3-back working memory task during encoding if stimuli were emotionally neutral or had moderate levels of negative emotionality. In contrast, when negative pictures with high levels of emotional intensity were used, the detrimental effects of the secondary task were significantly attenuated. PMID:25330251
The Effects of Aging and IQ on Item and Associative Memory
Ratcliff, Roger; Thapar, Anjali; McKoon, Gail
2011-01-01
The effects of aging and IQ on performance were examined in four memory tasks: item recognition, associative recognition, cued recall, and free recall. For item and associative recognition, accuracy and the response time distributions for correct and error responses were explained by Ratcliff’s (1978) diffusion model, at the level of individual participants. The values of the components of processing identified by the model for the recognition tasks, as well as accuracy for cued and free recall, were compared across levels of IQ ranging from 85 to 140 and age (college-age, 60-74 year olds, and 75-90 year olds). IQ had large effects on the quality of the evidence from memory on which decisions were based in the recognition tasks and on accuracy in the recall tasks, except for the oldest participants, for whom some of the measures were near floor values. Drift rates in the recognition tasks, accuracy in the recall tasks, and IQ all correlated strongly with each other. However, there was a small decline with age in drift rates for item recognition and a large decline for associative recognition and accuracy in cued recall (about 70 percent). In contrast, there were large age effects on boundary separation and nondecision time (which correlated across tasks), but little effect of IQ. The implications of these results for single- and dual-process models of item recognition are discussed, and it is concluded that models that deal with both RTs and accuracy are subject to many more constraints than models that deal with only one of these measures. Overall, the results of the study show a complicated but interpretable pattern of interactions that present important targets for response time and memory models. PMID:21707207
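The diffusion model referenced here describes two-choice decisions as noisy evidence accumulation between two response boundaries: drift rate indexes evidence quality, boundary separation indexes response caution, and nondecision time covers encoding and motor output. A minimal simulation sketch of a single accumulation process; the parameter values are illustrative, not fitted to the study's data:

```python
import random

def diffusion_trial(drift, boundary, nondecision, dt=0.001, noise=1.0):
    """Simulate one trial: evidence starts at 0 and accumulates with
    Gaussian noise until it hits +boundary ("old") or -boundary ("new").
    Returns the response and the total reaction time in seconds."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return ("old" if x > 0 else "new", nondecision + t)

random.seed(1)
trials = [diffusion_trial(drift=1.5, boundary=1.0, nondecision=0.3)
          for _ in range(500)]
accuracy = sum(resp == "old" for resp, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```

Higher drift yields faster, more accurate responses, while wider boundaries trade speed for accuracy; this separation is how the model can attribute IQ effects to evidence quality (drift) and age effects to caution and nondecision time, as the abstract reports.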
Baez, Sandra; Marengo, Juan; Perez, Ana; Huepe, David; Font, Fernanda Giralt; Rial, Veronica; Gonzalez-Gadea, María Luz; Manes, Facundo; Ibanez, Agustin
2015-09-01
Impaired social cognition has been claimed to be a mechanism underlying the development and maintenance of borderline personality disorder (BPD). One important aspect of social cognition is the theory of mind (ToM), a complex skill that seems to be influenced by more basic processes, such as executive functions (EF) and emotion recognition. Previous ToM studies in BPD have yielded inconsistent results. This study assessed the performance of BPD adults on ToM, emotion recognition, and EF tasks. We also examined whether EF and emotion recognition could predict the performance on ToM tasks. We evaluated 15 adults with BPD and 15 matched healthy controls using different tasks of EF, emotion recognition, and ToM. The results showed that BPD adults exhibited deficits in the three domains, which seem to be task-dependent. Furthermore, we found that EF and emotion recognition predicted the performance on ToM. Our results suggest that tasks that involve real-life social scenarios and contextual cues are more sensitive to detect ToM and emotion recognition deficits in BPD individuals. Our findings also indicate that (a) ToM variability in BPD is partially explained by individual differences on EF and emotion recognition; and (b) ToM deficits of BPD patients are partially explained by the capacity to integrate cues from face, prosody, gesture, and social context to identify the emotions and others' beliefs.
Cognitive Factors Affecting Free Recall, Cued Recall, and Recognition Tasks in Alzheimer's Disease
Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru
2012-01-01
Background/Aims: Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). Subjects: We recruited 349 consecutive AD patients who attended a memory clinic. Methods: Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Results: Performance on the ADAS Orientation task was related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task to performance on the cued recall task, and performance on the ADAS Ideational Praxis task to performance on the free recall, cued recall, and recognition tasks. Conclusion: The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. These characteristics may be advantageous for evaluating patients’ memory impairments in daily living. PMID:22962551
Memory Asymmetry of Forward and Backward Associations in Recognition Tasks
Yang, Jiongjiong; Zhu, Zijian; Mecklinger, Axel; Fang, Zhiyong; Li, Han
2013-01-01
There is an intensive debate on whether memory for serial order is symmetric. The objective of this study was to explore whether associative asymmetry is modulated by memory task (recognition vs. cued recall). Participants were asked to memorize word triples (Experiment 1–2) or pairs (Experiment 3–6) during the study phase. They then recalled the word by a cue during a cued recall task (Experiment 1–4), and judged whether the presented two words were in the same or in a different order compared to the study phase during a recognition task (Experiment 1–6). To control for perceptual matching between the study and test phase, participants were presented with vertical test pairs when they made directional judgment in Experiment 5. In Experiment 6, participants also made associative recognition judgments for word pairs presented at the same or the reversed position. The results showed that forward associations were recalled at similar levels as backward associations, and that the correlations between forward and backward associations were high in the cued recall tasks. On the other hand, the direction of forward associations was recognized more accurately (and more quickly) than backward associations, and their correlations were comparable to the control condition in the recognition tasks. This forward advantage was also obtained for the associative recognition task. Diminishing positional information did not change the pattern of associative asymmetry. These results suggest that associative asymmetry is modulated by cued recall and recognition manipulations, and that direction as a constituent part of a memory trace can facilitate associative memory. PMID:22924326
Effective connectivity of visual word recognition and homophone orthographic errors
Guàrdia-Olmos, Joan; Peró-Cebollero, Maribel; Zarabozo-Hurtado, Daniel; González-Garrido, Andrés A.; Gudayol-Ferré, Esteve
2015-01-01
The study of orthographic errors in a transparent language like Spanish is an important topic in relation to writing acquisition. The development of neuroimaging techniques, particularly functional magnetic resonance imaging (fMRI), has enabled the study of such relationships between brain areas. The main objective of the present study was to explore the patterns of effective connectivity by processing pseudohomophone orthographic errors among subjects with high and low spelling skills. Two groups of 12 Mexican subjects each, matched by age, were formed based on their results in a series of ad hoc spelling-related out-of-scanner tests: a high spelling skills (HSS) group and a low spelling skills (LSS) group. During the fMRI session, two experimental tasks were applied (a spelling recognition task and a visuoperceptual recognition task). Regions of interest and their signal values were obtained for both tasks. Based on these values, structural equation models (SEMs) were obtained for each spelling-competence group (HSS and LSS) and task through maximum likelihood estimation, and the model with the best fit was chosen in each case. Likewise, dynamic causal models (DCMs) were estimated for all conditions across tasks and groups. The HSS group’s SEM results suggest that, in the spelling recognition task, the right middle temporal gyrus and, to a lesser extent, the left parahippocampal gyrus receive most of the significant effects, whereas the DCM results in the visuoperceptual recognition task show less complex effects that are still congruent with the previous results, with several areas playing an important role. In general, these results are consistent with the major findings of related studies of linguistic processing, but they are the first analyses of statistical effective brain connectivity in transparent languages. PMID:26042070
The effect of the feeling of resolution and recognition performance on the revelation effect.
Miura, Hiroshi; Itoh, Yuji
2016-10-01
The fact that engaging in a cognitive task before a recognition task increases the probability of "old" responses is known as the revelation effect. We used several cognitive tasks to examine whether the feeling of resolution, a key construct in the proposed mechanism of the revelation effect, is related to its occurrence. The results show that the revelation effect was not caused by a visual search task, which elicited the feeling of resolution, but was caused by an unsolvable anagram task and an articulatory suppression task, which did not elicit the feeling of resolution. These results suggest that the revelation effect is not related to the feeling of resolution. Moreover, the revelation effect was likely to occur in participants who performed poorly on the recognition task, suggesting that it tends to occur when people depend more on familiarity than on the recollection process.
Recognizing Biological Motion and Emotions from Point-Light Displays in Autism Spectrum Disorders
Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P.; Wenderoth, Nicole; Alaerts, Kaat
2012-01-01
One of the main characteristics of Autism Spectrum Disorder (ASD) is problems with social interaction and communication. Here, we explored ASD-related alterations in ‘reading’ the body language of other humans. Accuracy and reaction times were assessed in two observational tasks involving the recognition of ‘biological motion’ and ‘emotions’ from point-light displays (PLDs). Eye movements were recorded during completion of the tests. Results indicated that typically developed participants were more accurate than ASD subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed on two control tasks (involving the indication of color changes in the moving point-lights). Group differences in reaction times existed on all tasks, but effect sizes were larger for the biological motion and emotion recognition tasks. Biological motion recognition abilities were related to a person’s ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not be entirely attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point-light displays. Eye-movement results indicated that ASD participants generally produced more saccades and shorter fixation durations than the control group, and, especially for emotion recognition, these altered eye movements were associated with reductions in task performance. PMID:22970227
Within-person adaptivity in frugal judgments from memory.
Filevich, Elisa; Horn, Sebastian S; Kühn, Simone
2017-12-22
Humans can exploit recognition memory as a simple cue for judgment. The utility of recognition depends on the interplay with the environment, particularly on its predictive power (validity) in a domain. It is, therefore, an important question whether people are sensitive to differences in recognition validity between domains. Strategic, intra-individual changes in the reliance on recognition have not been investigated so far. The present study fills this gap by scrutinizing within-person changes in using a frugal strategy, the recognition heuristic (RH), across two task domains that differed in recognition validity. The results showed adaptive changes in the reliance on recognition between domains. However, these changes were neither associated with the individual recognition validities nor with corresponding changes in these validities. These findings support a domain-adaptivity explanation, suggesting that people have broader intuitions about the usefulness of recognition across different domains that are nonetheless sufficiently robust for adaptive decision making. The analysis of metacognitive confidence reports mirrored and extended these results. Like RH use, confidence ratings covaried with task domain, but not with individual recognition validities. The changes in confidence suggest that people may have metacognitive access to information about global differences between task domains, but not to individual cue validities.
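The recognition heuristic (RH) described above has a simple operational form; the following sketch (illustrative object names and criterion values, not the study's materials) shows how an RH choice and a domain's recognition validity can be computed:

```python
from typing import Optional

def rh_choice(a: str, b: str, recognized: set) -> Optional[str]:
    """Recognition heuristic: if exactly one of two objects is
    recognized, infer that it scores higher on the criterion."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return None  # not applicable: both or neither object recognized

def recognition_validity(pairs, recognized, criterion) -> float:
    """Proportion of correct inferences among pairs where the
    heuristic applies -- its 'validity' in a domain."""
    applicable, correct = 0, 0
    for a, b in pairs:
        choice = rh_choice(a, b, recognized)
        if choice is None:
            continue
        applicable += 1
        other = b if choice == a else a
        if criterion[choice] > criterion[other]:
            correct += 1
    return correct / applicable if applicable else float("nan")
```

Comparing the validity computed this way across two task domains is the kind of contrast the study examines.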
[Explicit memory for type font of words in source monitoring and recognition tasks].
Hatanaka, Yoshiko; Fujita, Tetsuya
2004-02-01
We investigated whether people can consciously remember the type fonts of words, using two methods of examining explicit memory: source monitoring and old/new recognition. We set up matched, non-matched, and non-studied conditions between the study and test words using two kinds of type fonts: Gothic and MARU. After studying words under one of two encoding conditions, semantic or physical, subjects in a source-monitoring task made a three-way discrimination between new words, Gothic words, and MARU words (Exp. 1). Subjects in an old/new-recognition task indicated whether test words had been previously presented or not (Exp. 2). We compared the source judgments with the old/new-recognition data. The data showed conscious recollection of the type font of words in the source-monitoring task and a dissociation between source-monitoring and old/new-recognition performance.
Golan, Ofer; Baron-Cohen, Simon; Golan, Yael
2008-09-01
Children with autism spectrum conditions (ASC) have difficulties recognizing others' emotions. Research has mostly focused on basic emotion recognition, devoid of context. This study reports the results of a new task assessing recognition of complex emotions and mental states in social contexts. An ASC group (n = 23) was compared to a general population control group (n = 24). Children with ASC performed worse than controls on the task. Using task scores, more than 87% of the participants could be allocated to their group. This new test quantifies complex emotion and mental state recognition in life-like situations. Our findings reveal that children with ASC have residual difficulties in this aspect of empathy. The use of language-based compensatory strategies for emotion recognition is discussed.
Stenbäck, Victoria; Hällgren, Mathias; Lyxell, Björn; Larsby, Birgitta
2015-06-01
Cognitive functions and speech-recognition-in-noise were evaluated with a cognitive test battery assessing response inhibition (using the Hayling task), working memory capacity (WMC), and verbal information processing, and with an auditory test of speech recognition. The cognitive tests were performed in silence, whereas the speech recognition task was presented in noise. Thirty young normal-hearing individuals participated in the study. The aim of the study was to investigate one executive function, response inhibition; whether it is related to individual WMC; and how speech-recognition-in-noise relates to WMC and inhibitory control. The results showed a significant difference between initiation and response inhibition, suggesting that the Hayling task taps cognitive activity responsible for executive control. Our findings also suggest that high verbal ability was associated with better performance in the Hayling task. We also present findings suggesting that individuals who perform well on tasks involving response inhibition and WMC also perform well on a speech-in-noise task. Our findings indicate that the capacity to resist semantic interference can be used to predict performance on speech-in-noise tasks. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.
Wieckowski, Andrea Trubanova; White, Susan W
2017-01-01
Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less on the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, but not during the scripted task: those with ASD expressed emotion less clearly. Results suggest altered eye gaze to the mouth region, but not the eye region, as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.
ERIC Educational Resources Information Center
Farber, Ellen A.; Moely, Barbara E.
Results of two studies investigating children's abilities to use different kinds of cues to infer another's affective state are reported in this paper. In the first study, 48 children (3, 4, and 6 to 7 years of age) were given three different kinds of tasks (interpersonal task, facial recognition task, and vocal recognition task). A cross-age…
Relationship between listeners' nonnative speech recognition and categorization abilities
Atagi, Eriko; Bent, Tessa
2015-01-01
Enhancement of the perceptual encoding of talker characteristics (indexical information) in speech can facilitate listeners' recognition of linguistic content. The present study explored this indexical-linguistic relationship in nonnative speech processing by examining listeners' performance on two tasks: nonnative accent categorization and nonnative speech-in-noise recognition. Results indicated substantial variability across listeners in their performance on both the accent categorization and nonnative speech recognition tasks. Moreover, listeners' accent categorization performance correlated with their nonnative speech-in-noise recognition performance. These results suggest that having more robust indexical representations for nonnative accents may allow listeners to more accurately recognize the linguistic content of nonnative speech. PMID:25618098
Covert face recognition in congenital prosopagnosia: a group study.
Rivolta, Davide; Palermo, Romina; Schmalzl, Laura; Coltheart, Max
2012-03-01
Even though people with congenital prosopagnosia (CP) never develop a normal ability to "overtly" recognize faces, some individuals show indices of "covert" (or implicit) face recognition. The aim of this study was to demonstrate covert face recognition in CP when participants could not overtly recognize the faces. Eleven people with CP completed three tasks assessing their overt face recognition ability, and three tasks assessing their "covert" face recognition: a Forced choice familiarity task, a Forced choice cued task, and a Priming task. Evidence of covert recognition was observed with the Forced choice familiarity task, but not the Priming task. In addition, we propose that the Forced choice cued task does not measure covert processing as such, but instead "provoked-overt" recognition. Our study clearly shows that people with CP demonstrate covert recognition for faces that they cannot overtly recognize, and that behavioural tasks vary in their sensitivity to detect covert recognition in CP. Copyright © 2011 Elsevier Srl. All rights reserved.
Preti, Emanuele; Richetin, Juliette; Suttora, Chiara; Pisani, Alberto
2016-04-30
Dysfunctions in social cognition characterize personality disorders. However, mixed results have emerged from the literature on emotion processing: Borderline Personality Disorder (BPD) traits are associated either with enhanced emotion recognition, with impairments, or with functioning equal to controls. These apparent contradictions might result from the complexity of the emotion recognition tasks used and from individual differences in impulsivity and effortful control. We conducted a study in a sample of undergraduate students (n=80), assessing BPD traits and using an emotion recognition task that requires the processing of either visual information alone or both visual and acoustic information. We also measured individual differences in impulsivity and effortful control. Results demonstrated the moderating role of some components of impulsivity and effortful control on the capacity of BPD traits to predict anger and happiness recognition. We organize the discussion around the interaction between different components of regulatory functioning and task complexity for a better understanding of emotion recognition in BPD samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
The role of visual imagery in the retention of information from sentences.
Drose, G S; Allen, G L
1994-01-01
We conducted two experiments to evaluate a multiple-code model of sentence memory that posits both propositional and visual representational systems. Both experiments involved recognition memory. The results of Experiment 1 indicated that subjects' recognition memory for concrete sentences was superior to their recognition memory for abstract sentences. Instructions to use visual imagery to enhance recognition performance yielded no effects. Experiment 2 tested the prediction that interference by a visual task would differentially affect recognition memory for concrete sentences. Results showed the interference task to have had a detrimental effect on recognition memory for both concrete and abstract sentences. Overall, the evidence provided partial support for both a multiple-code model and a semantic integration model of sentence memory.
Štillová, Klára; Jurák, Pavel; Chládek, Jan; Chrastina, Jan; Halámek, Josef; Bočková, Martina; Goldemundová, Sabina; Říha, Ivo; Rektor, Ivan
2015-01-01
Objective: To study the involvement of the anterior nuclei of the thalamus (ANT), as compared to the involvement of the hippocampus, in the processes of encoding and recognition during visual and verbal memory tasks. Methods: We studied intracerebral recordings in patients with pharmacoresistant epilepsy who underwent deep brain stimulation (DBS) of the ANT with depth electrodes implanted bilaterally in the ANT, and compared the results with epilepsy surgery candidates with depth electrodes implanted bilaterally in the hippocampus. We recorded the event-related potentials (ERPs) elicited by the visual and verbal memory encoding and recognition tasks. Results: P300-like potentials were recorded in the hippocampus during the visual and verbal memory encoding and recognition tasks, and in the ANT during the visual encoding and the visual and verbal recognition tasks. No significant ERPs were recorded during the verbal encoding task in the ANT. In the visual and verbal recognition tasks, the P300-like potentials in the ANT preceded those in the hippocampus. Conclusions: The ANT is a structure in the memory pathway that processes memory information before the hippocampus. We suggest that the ANT has a specific role in memory processes, especially memory recognition, and that memory disturbance should be considered in patients with ANT-DBS and in patients with ANT lesions. The ANT is well positioned to serve as a subcortical gate for memory processing in cortical structures. PMID:26529407
Does cortisol modulate emotion recognition and empathy?
Duesenberg, Moritz; Weber, Juliane; Schulze, Lars; Schaeuffele, Carmen; Roepke, Stefan; Hellmann-Regen, Julian; Otte, Christian; Wingenfeld, Katja
2016-04-01
Emotion recognition and empathy are important aspects of interacting with and understanding other people's behaviors and feelings. The human environment comprises stressful situations that impact social interactions on a daily basis. The aim of the study was to examine the effects of the stress hormone cortisol on emotion recognition and empathy. In this placebo-controlled study, 40 healthy men and 40 healthy women (mean age 24.5 years) received either 10 mg of hydrocortisone or placebo. We used the Multifaceted Empathy Test to measure emotional and cognitive empathy. Furthermore, we examined emotion recognition from facial expressions, which contained two emotions (anger and sadness) at two intensities (40% and 80%). We did not find a main effect of treatment or sex on either empathy or emotion recognition, but there was a sex × emotion interaction on emotion recognition. The main result was a four-way interaction on emotion recognition involving treatment, sex, emotion, and task difficulty. At 40% intensity, women recognized angry faces better than men in the placebo condition. Furthermore, in the placebo condition, men recognized sadness better than anger. At 80% intensity, men and women performed equally well in recognizing sad faces, but men performed worse than women with regard to angry faces. Our results therefore did not support the hypothesis that an increase in cortisol concentration alone influences empathy and emotion recognition in healthy young individuals. However, sex and task difficulty appear to be important variables in emotion recognition from facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Deletion of the GluA1 AMPA receptor subunit impairs recency-dependent object recognition memory
Sanderson, David J.; Hindley, Emma; Smeaton, Emily; Denny, Nick; Taylor, Amy; Barkus, Chris; Sprengel, Rolf; Seeburg, Peter H.; Bannerman, David M.
2011-01-01
Deletion of the GluA1 AMPA receptor subunit impairs short-term spatial recognition memory. It has been suggested that short-term recognition depends upon memory caused by the recent presentation of a stimulus that is independent of contextual–retrieval processes. The aim of the present set of experiments was to test whether the role of GluA1 extends to nonspatial recognition memory. Wild-type and GluA1 knockout mice were tested on the standard object recognition task and a context-independent recognition task that required recency-dependent memory. In a first set of experiments it was found that GluA1 deletion failed to impair performance on either of the object recognition or recency-dependent tasks. However, GluA1 knockout mice displayed increased levels of exploration of the objects in both the sample and test phases compared to controls. In contrast, when the time that GluA1 knockout mice spent exploring the objects was yoked to control mice during the sample phase, it was found that GluA1 deletion now impaired performance on both the object recognition and the recency-dependent tasks. GluA1 deletion failed to impair performance on a context-dependent recognition task regardless of whether object exposure in knockout mice was yoked to controls or not. These results demonstrate that GluA1 is necessary for nonspatial as well as spatial recognition memory and plays an important role in recency-dependent memory processes. PMID:21378100
Cognitive factors affecting free recall, cued recall, and recognition tasks in Alzheimer's disease.
Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru
2012-01-01
Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). We recruited 349 consecutive AD patients who attended a memory clinic. Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients' memory impairments in daily living.
Convolutional neural networks and face recognition task
NASA Astrophysics Data System (ADS)
Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.
2017-09-01
Computer vision tasks have remained highly important over the last couple of years. One of the most complicated problems in computer vision is face recognition, which can be used in security systems to provide safety and to identify a person among others. There is a variety of approaches to this task, but there is still no universal solution that gives adequate results in all cases. The current paper presents the following approach. First, we extract the area containing the face; then we apply the Canny edge detector. At the next stage we use convolutional neural networks (CNN) to solve the face recognition and person identification task.
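The middle stage of the pipeline described above (an edge map computed on the cropped face region, which is then fed to the CNN) can be illustrated in miniature. The sketch below uses plain gradient magnitude with a threshold as a simplified stand-in for the full Canny algorithm the authors name (no hysteresis or non-maximum suppression), on a toy grayscale image:

```python
def edge_map(img, thresh=100):
    """Simplified edge detector: central-difference gradient magnitude
    plus a threshold. A stand-in for Canny, for illustration only."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]  # vertical gradient
            if (gx * gx + gy * gy) ** 0.5 >= thresh:
                out[y][x] = 1  # mark edge pixel
    return out
```

In practice one would use a library implementation (e.g. OpenCV's Canny) on the detected face crop before the CNN stage.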
ERIC Educational Resources Information Center
Golan, Ofer; Baron-Cohen, Simon; Golan, Yael
2008-01-01
Children with autism spectrum conditions (ASC) have difficulties recognizing others' emotions. Research has mostly focused on "basic" emotion recognition, devoid of context. This study reports the results of a new task, assessing recognition of "complex" emotions and mental states in social contexts. An ASC group (n = 23) was compared to a general…
Facial emotion recognition is inversely correlated with tremor severity in essential tremor.
Auzou, Nicolas; Foubert-Samier, Alexandra; Dupouy, Sandrine; Meissner, Wassilios G
2014-04-01
Here we assessed limbic and orbitofrontal control in 20 patients with essential tremor (ET) and 18 age-matched healthy controls using the Ekman Facial Emotion Recognition Task and the Iowa Gambling Task. Our results show an inverse relation between facial emotion recognition and tremor severity. ET patients also showed worse performance in joy and fear recognition, as well as subtle abnormalities in risk detection, but these differences did not reach significance after correction for multiple testing.
Approach to recognition of flexible form for credit card expiration date recognition as example
NASA Astrophysics Data System (ADS)
Sheshkus, Alexander; Nikolaev, Dmitry P.; Ingacheva, Anastasia; Skoryukina, Natalya
2015-12-01
In this paper, we consider the task of finding information fields within a document with a flexible form, using the credit card expiration date field as an example. We discuss the main difficulties and suggest possible solutions. In our case, the task is to be solved on mobile devices, so computational complexity has to be as low as possible. We also provide results of an analysis of the suggested algorithm. The error distribution of the recognition system shows that the suggested algorithm solves the task with the required accuracy.
Glucose enhancement of a facial recognition task in young adults.
Metzger, M M
2000-02-01
Numerous studies have reported that glucose administration enhances memory processes in both elderly and young adult subjects. Although these studies have utilized a variety of procedures and paradigms, investigations of both young and elderly subjects have typically used verbal tasks (word list recall, paragraph recall, etc.). In the present study, the effect of glucose consumption on a nonverbal, facial recognition task in young adults was examined. Lemonade sweetened with either glucose (50 g) or saccharin (23.7 mg) was consumed by college students (mean age of 21.1 years) 15 min prior to a facial recognition task. The task consisted of a familiarization phase in which subjects were presented with "target" faces, followed immediately by a recognition phase in which subjects had to identify the targets among a random array of familiar target and novel "distractor" faces. Statistical analysis indicated that there were no differences on hit rate (target identification) for subjects who consumed either saccharin or glucose prior to the test. However, further analyses revealed that subjects who consumed glucose committed significantly fewer false alarms and had (marginally) higher d-prime scores (a signal detection measure) compared to subjects who consumed saccharin prior to the test. These results parallel a previous report demonstrating glucose enhancement of a facial recognition task in probable Alzheimer's patients; however, this is believed to be the first demonstration of glucose enhancement for a facial recognition task in healthy, young adults.
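The d-prime measure reported above is the standard signal-detection sensitivity index, computed from the hit and false-alarm rates via inverse-normal (z) transforms. A minimal sketch (the rates shown in the test are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(hits) - z(false alarms).
    Rates of exactly 0 or 1 would map to +/-infinity and are
    conventionally nudged before z-transforming (not shown here)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

Equal hit and false-alarm rates give d' = 0 (no sensitivity); fewer false alarms at the same hit rate raise d', which is the pattern the glucose group showed.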
Bizot, Jean-Charles; Herpin, Alexandre; Pothion, Stéphanie; Pirot, Sylvain; Trovero, Fabrice; Ollat, Hélène
2005-07-01
The effect of chronic sulbutiamine treatment on memory was studied in rats with a spatial delayed-non-match-to-sample (DNMTS) task in a radial maze and a two-trial object recognition task. After completion of training in the DNMTS task, animals were subjected for 9 weeks to daily injections of either saline or sulbutiamine (12.5 or 25 mg/kg). Sulbutiamine did not modify memory in the DNMTS task but improved it in the object recognition task. Dizocilpine impaired both acquisition and retention of the DNMTS task in the saline-treated group, but not in the two sulbutiamine-treated groups, suggesting that sulbutiamine may counteract the amnesia induced by blockade of N-methyl-D-aspartate glutamate receptors. Taken together, these results are in favor of a beneficial effect of sulbutiamine on working and episodic memory.
Age-related differences in listening effort during degraded speech recognition
Ward, Kristina M.; Shen, Jing; Souza, Pamela E.; Grieco-Calub, Tina M.
2016-01-01
Objectives: The purpose of the current study was to quantify age-related differences in executive control as it relates to dual-task performance, which is thought to represent listening effort, during degraded speech recognition. Design: Twenty-five younger adults (18–24 years) and twenty-one older adults (56–82 years) completed a dual-task paradigm that consisted of a primary speech recognition task and a secondary visual monitoring task. Sentence material in the primary task was either unprocessed or spectrally degraded into 8, 6, or 4 spectral channels using noise-band vocoding. Performance on the visual monitoring task was assessed by the accuracy and reaction time of participants' responses. Performance on the primary and secondary task was quantified in isolation (i.e., single task) and during the dual-task paradigm. Participants also completed a standardized psychometric measure of executive control, including attention and inhibition. Statistical analyses were implemented to evaluate changes in listeners' performance on the primary and secondary tasks (1) per condition (unprocessed vs. vocoded conditions); (2) per task (baseline vs. dual task); and (3) per group (younger vs. older adults). Results: Speech recognition declined with increasing spectral degradation for both younger and older adults when they performed the task in isolation or concurrently with the visual monitoring task. Older adults were slower and less accurate than younger adults on the visual monitoring task when performed in isolation, which paralleled age-related differences in standardized scores of executive control. When compared to single-task performance, older adults experienced greater declines in secondary-task accuracy, but not reaction time, than younger adults. Furthermore, results revealed that age-related differences in executive control significantly contributed to age-related differences on the visual monitoring task during the dual-task paradigm.
Conclusions: Older adults experienced significantly greater declines in secondary-task accuracy during degraded speech recognition than younger adults. These findings suggest that older listeners expended greater listening effort than younger listeners, which may be partially attributed to age-related differences in executive control. PMID:27556526
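Noise-band vocoding, as used in the study above, divides the speech spectrum into a small number of channels and replaces the fine structure in each with envelope-modulated noise. One common choice of channel boundaries is even spacing on a log-frequency axis; the sketch below computes such band edges (the corner frequencies in the test are assumptions for illustration, not taken from the study):

```python
import math

def log_spaced_edges(lo_hz, hi_hz, n_channels):
    """Band edges for an n-channel noise-band vocoder, spaced evenly
    on a log-frequency axis (one common convention; actual studies
    may use Greenwood or ERB spacing instead)."""
    step = (math.log(hi_hz) - math.log(lo_hz)) / n_channels
    return [round(math.exp(math.log(lo_hz) + i * step), 1)
            for i in range(n_channels + 1)]
```

Fewer channels (8 → 6 → 4 in the study) mean coarser spectral resolution and, as reported, lower speech recognition.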
Holistic processing, contact, and the other-race effect in face recognition.
Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle
2014-12-01
Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in holistic processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Acquired prosopagnosia without word recognition deficits.
Susilo, Tirta; Wright, Victoria; Tree, Jeremy J; Duchaine, Bradley
2015-01-01
It has long been suggested that face recognition relies on specialized mechanisms that are not involved in visual recognition of other object categories, including those that require expert, fine-grained discrimination at the exemplar level, such as written words. But according to the recently proposed many-to-many theory of object recognition (MTMT), visual recognition of faces and words is carried out by common mechanisms [Behrmann, M., & Plaut, D. C. (2013). Distributed circuits, not circumscribed centers, mediate visual recognition. Trends in Cognitive Sciences, 17, 210-219]. MTMT acknowledges that face and word recognition are lateralized, but posits that the mechanisms that predominantly carry out face recognition still contribute to word recognition and vice versa. MTMT makes a key prediction, namely that acquired prosopagnosics should exhibit some measure of word recognition deficits. We tested this prediction by assessing written word recognition in five acquired prosopagnosic patients. Four patients had lesions limited to the right hemisphere, while one had bilateral lesions that were more pronounced in the right hemisphere. The patients completed a total of seven word recognition tasks: two lexical decision tasks and five reading-aloud tasks, totalling more than 1200 trials. The performances of the four older patients (3 female, age range 50-64 years) were compared to those of 12 older controls (8 female, age range 56-66 years), while the performance of the younger prosopagnosic (male, 31 years) was compared to that of 14 younger controls (9 female, age range 20-33 years). We analysed all results at the single-patient level using Crawford's t-test. Across the seven tasks, four prosopagnosics performed as quickly and accurately as controls. Our results demonstrate that acquired prosopagnosia can exist without word recognition deficits. These findings are inconsistent with a key prediction of MTMT. They instead support the hypothesis that face recognition is carried out by specialized mechanisms that do not contribute to recognition of written words.
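Crawford's t-test, used in the study above for single-patient comparisons, is the Crawford & Howell modified t, which inflates the denominator of the ordinary one-sample t to account for the uncertainty of a small control sample. A minimal sketch (the scores in the test are illustrative, not the study's data):

```python
import math

def crawford_t(patient_score, control_scores):
    """Crawford & Howell's modified t for comparing a single case
    against a small control sample; returns (t, degrees of freedom).
    t = (x - mean) / (sd * sqrt((n + 1) / n)), df = n - 1."""
    n = len(control_scores)
    mean = sum(control_scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in control_scores) / (n - 1))
    t = (patient_score - mean) / (sd * math.sqrt((n + 1) / n))
    return t, n - 1
```

The resulting t is referred to a t-distribution with n - 1 degrees of freedom to decide whether the patient's score falls outside the control range.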
Lodder, Gerine M A; Scholte, Ron H J; Goossens, Luc; Engels, Rutger C M E; Verhagen, Maaike
2016-02-01
Based on the belongingness regulation theory (Gardner et al., 2005, Pers. Soc. Psychol. Bull., 31, 1549), this study focuses on the relationship between loneliness and social monitoring. Specifically, we examined whether loneliness relates to performance on three emotion recognition tasks and whether lonely individuals show increased gazing towards their conversation partner's faces in a real-life conversation. Study 1 examined 170 college students (Mage = 19.26; SD = 1.21) who completed an emotion recognition task with dynamic stimuli (morph task) and a micro(-emotion) expression recognition task. Study 2 examined 130 college students (Mage = 19.33; SD = 2.00) who completed the Reading the Mind in the Eyes Test and who had a conversation with an unfamiliar peer while their gaze direction was videotaped. In both studies, loneliness was measured using the UCLA Loneliness Scale version 3 (Russell, 1996, J. Pers. Assess., 66, 20). The results showed that loneliness was unrelated to emotion recognition on all emotion recognition tasks, but that it was related to increased gaze towards their conversation partner's faces. Implications for the belongingness regulation system of lonely individuals are discussed. © 2015 The British Psychological Society.
Multi-task learning with group information for human action recognition
NASA Astrophysics Data System (ADS)
Qian, Li; Wu, Song; Pu, Nan; Xu, Shulin; Xiao, Guoqiang
2018-04-01
Human action recognition is an important and challenging task in computer vision research, due to variations in human motion performance, interpersonal differences, and recording settings. In this paper, we propose a novel multi-task learning framework with group information (MTL-GI) for accurate and efficient human action recognition. Specifically, we first obtain group information by calculating the mutual information between Gaussian components and action categories according to their latent relationship, and we cluster similar action categories into the same group by affinity propagation clustering. In addition, to explore the relationships among related tasks, we incorporate the group information into multi-task learning. Experimental results on two popular benchmarks (the UCF50 and HMDB51 datasets) demonstrate the superiority of our proposed MTL-GI framework.
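The grouping signal described above can be illustrated with a small mutual-information computation over (component, category) co-occurrences. The sketch below assumes discrete hard assignments for simplicity (the paper's Gaussian components would contribute soft responsibilities):

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of observed
    (component, category) pairs via their empirical joint and
    marginal frequencies."""
    n = len(pairs)
    joint = Counter(pairs)                # joint counts over (x, y)
    px = Counter(x for x, _ in pairs)     # marginal counts of components
    py = Counter(y for _, y in pairs)     # marginal counts of categories
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())
```

Categories whose component usage yields high mutual information with one another are candidates for the same group, which a clustering step (affinity propagation in the paper) then formalizes.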
Chatterjee, Monita; Peng, Shu-Chen
2008-01-01
Fundamental frequency (F0) processing by cochlear implant (CI) listeners was measured using a psychophysical task and a speech intonation recognition task. Listeners’ Weber fractions for modulation frequency discrimination were measured using an adaptive, 3-interval, forced-choice paradigm: stimuli were presented through a custom research interface. In the speech intonation recognition task, listeners were asked to indicate whether resynthesized bisyllabic words, when presented in the free field through the listeners’ everyday speech processor, were question-like or statement-like. The resynthesized tokens were systematically manipulated to have different initial F0s to represent male vs. female voices, and different F0 contours (i.e., falling, flat, and rising). Although the CI listeners showed considerable variation in performance on both tasks, significant correlations were observed between the CI listeners’ sensitivity to modulation frequency in the psychophysical task and their performance in intonation recognition. Consistent with their greater reliance on temporal cues, the CI listeners’ performance in the intonation recognition task was significantly poorer with the higher initial-F0 stimuli than with the lower initial-F0 stimuli. Similar results were obtained with normal hearing listeners attending to noiseband-vocoded CI simulations with reduced spectral resolution. PMID:18093766
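Adaptive forced-choice procedures like the one described can be sketched with a generic transformed staircase. The following is not the authors' exact paradigm: it simulates a 3-interval forced-choice observer (chance = 1/3) with a hypothetical psychometric function, and runs a standard 2-down-1-up staircase, which converges near the 70.7%-correct point; all parameter values are illustrative.

```python
import math
import random

def p_correct(level, threshold=10.0, slope=2.0):
    """Simulated 3IFC observer: chance performance is 1/3, and accuracy
    rises sigmoidally with stimulus level. Parameters are illustrative."""
    return 1 / 3 + (2 / 3) / (1 + math.exp(-(level - threshold) / slope))

def staircase(start=20.0, step=2.0, n_trials=200, seed=1):
    """Generic 2-down-1-up staircase: two correct responses in a row make
    the task harder, one error makes it easier. The mean of the last few
    reversal levels estimates the ~70.7%-correct threshold."""
    rng = random.Random(seed)
    level, streak, last_dir, reversals = start, 0, 0, []
    for _ in range(n_trials):
        if rng.random() < p_correct(level):
            streak += 1
            if streak == 2:                      # two correct -> harder
                streak = 0
                if last_dir == +1:
                    reversals.append(level)      # direction changed: reversal
                last_dir = -1
                level = max(level - step, step)
        else:                                    # one error -> easier
            streak = 0
            if last_dir == -1:
                reversals.append(level)
            last_dir = +1
            level += step
    tail = reversals[-6:]
    return sum(tail) / len(tail)

print(round(staircase(), 1))
```

With the hypothetical observer above, the estimate settles near the level where the simulated psychometric function passes 70.7% correct.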
Distinct roles of basal forebrain cholinergic neurons in spatial and object recognition memory.
Okada, Kana; Nishizawa, Kayo; Kobayashi, Tomoko; Sakata, Shogo; Kobayashi, Kazuto
2015-08-06
Recognition memory requires processing of various types of information such as objects and locations. Impairment in recognition memory is a prominent feature of amnesia and a symptom of Alzheimer's disease (AD). Basal forebrain cholinergic neurons contain two major groups, one localized in the medial septum (MS)/vertical diagonal band of Broca (vDB), and the other in the nucleus basalis magnocellularis (NBM). The roles of these cell groups in recognition memory have been debated, and it remains unclear how they contribute to it. We use a genetic cell targeting technique to selectively eliminate cholinergic cell groups and then test spatial and object recognition memory through different behavioural tasks. Eliminating MS/vDB neurons impairs spatial but not object recognition memory in the reference and working memory tasks, whereas NBM elimination undermines only object recognition memory in the working memory task. These impairments are restored by treatment with acetylcholinesterase inhibitors, anti-dementia drugs for AD. Our results highlight that MS/vDB and NBM cholinergic neurons are not only implicated in recognition memory but also have essential roles in different types of recognition memory.
Yamashita, Wakayo; Wang, Gang; Tanaka, Keiji
2010-01-01
One usually fails to recognize an unfamiliar object across changes in viewing angle when it has to be discriminated from similar distractor objects. Previous work has demonstrated that after long-term experience in discriminating among a set of objects seen from the same viewing angle, immediate recognition of the objects across 30-60 degrees changes in viewing angle becomes possible. The capability for view-invariant object recognition should develop during the within-viewing-angle discrimination, which includes two kinds of experience: seeing individual views and discriminating among the objects. The aim of the present study was to determine the relative contribution of each factor to the development of view-invariant object recognition capability. Monkeys were first extensively trained in a task that required view-invariant object recognition (Object task) with several sets of objects. The animals were then exposed to a new set of objects over 26 days in one of two preparatory tasks: one in which each object view was seen individually, and a second that required discrimination among the objects at each of four viewing angles. After the preparatory period, we measured the monkeys' ability to recognize the objects across changes in viewing angle, by introducing the object set to the Object task. Results indicated significant view-invariant recognition after the second but not first preparatory task. These results suggest that discrimination of objects from distractors at each of several viewing angles is required for the development of view-invariant recognition of the objects when the distractors are similar to the objects.
Recall and recognition of verbal paired associates in early Alzheimer's disease.
Lowndes, G J; Saling, M M; Ames, D; Chiu, E; Gonzalez, L M; Savage, G R
2008-07-01
The primary impairment in early Alzheimer's disease (AD) is encoding/consolidation, resulting from medial temporal lobe (MTL) pathology. AD patients perform poorly on cued-recall paired associate learning (PAL) tasks, which assess the ability of the MTLs to encode relational memory. Since encoding and retrieval processes are confounded within performance indexes on cued-recall PAL, its specificity for AD is limited. Recognition paradigms tend to show good specificity for AD, and are well tolerated, but are typically less sensitive than recall tasks. Associate-recognition is a novel PAL task requiring a combination of recall and recognition processes. We administered a verbal associate-recognition test and cued-recall analogue to 22 early AD patients and 55 elderly controls to compare their ability to discriminate these groups. Both paradigms used eight arbitrarily related word pairs (e.g., pool-teeth) with varying degrees of imageability. Associate-recognition was equally effective as the cued-recall analogue in discriminating the groups, and logistic regression demonstrated classification rates by both tasks were equivalent. These preliminary findings provide support for the clinical value of this recognition tool. Conceptually it has potential for greater specificity in informing neuropsychological diagnosis of AD in clinical samples but this requires further empirical support.
Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh
2004-11-01
Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that the recognition deficit in patients was similar for verbal and visual material. In both the schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women similarly benefited from color to reduce spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regard to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.
Sign Perception and Recognition in Non-Native Signers of ASL
Morford, Jill P.; Carlson, Martina L.
2011-01-01
Past research has established that delayed first language exposure is associated with comprehension difficulties in non-native signers of American Sign Language (ASL) relative to native signers. The goal of the current study was to investigate potential explanations of this disparity: do non-native signers have difficulty with all aspects of comprehension, or are their comprehension difficulties restricted to some aspects of processing? We compared the performance of deaf non-native, hearing L2, and deaf native signers on a handshape and location monitoring and a sign recognition task. The results indicate that deaf non-native signers are as rapid and accurate on the monitoring task as native signers, with differences in the pattern of relative performance across handshape and location parameters. By contrast, non-native signers differ significantly from native signers during sign recognition. Hearing L2 signers, who performed almost as well as the two groups of deaf signers on the monitoring task, resembled the deaf native signers more than the deaf non-native signers on the sign recognition task. The combined results indicate that delayed exposure to a signed language leads to an overreliance on handshape during sign recognition. PMID:21686080
Age-Related Differences in Listening Effort During Degraded Speech Recognition.
Ward, Kristina M; Shen, Jing; Souza, Pamela E; Grieco-Calub, Tina M
The purpose of the present study was to quantify age-related differences in executive control as it relates to dual-task performance, which is thought to represent listening effort, during degraded speech recognition. Twenty-five younger adults (YA; 18-24 years) and 21 older adults (OA; 56-82 years) completed a dual-task paradigm that consisted of a primary speech recognition task and a secondary visual monitoring task. Sentence material in the primary task was either unprocessed or spectrally degraded into 8, 6, or 4 spectral channels using noise-band vocoding. Performance on the visual monitoring task was assessed by the accuracy and reaction time of participants' responses. Performance on the primary and secondary task was quantified in isolation (i.e., single task) and during the dual-task paradigm. Participants also completed a standardized psychometric measure of executive control, including attention and inhibition. Statistical analyses were implemented to evaluate changes in listeners' performance on the primary and secondary tasks (1) per condition (unprocessed vs. vocoded conditions); (2) per task (single task vs. dual task); and (3) per group (YA vs. OA). Speech recognition declined with increasing spectral degradation for both YA and OA when they performed the task in isolation or concurrently with the visual monitoring task. OA were slower and less accurate than YA on the visual monitoring task when performed in isolation, which paralleled age-related differences in standardized scores of executive control. When compared with single-task performance, OA experienced greater declines in secondary-task accuracy, but not reaction time, than YA. Furthermore, results revealed that age-related differences in executive control significantly contributed to age-related differences on the visual monitoring task during the dual-task paradigm. OA experienced significantly greater declines in secondary-task accuracy during degraded speech recognition than YA. 
These findings are interpreted as suggesting that OA expended greater listening effort than YA, which may be partially attributed to age-related differences in executive control.
Štillová, Klára; Jurák, Pavel; Chládek, Jan; Chrastina, Jan; Halámek, Josef; Bočková, Martina; Goldemundová, Sabina; Říha, Ivo; Rektor, Ivan
2015-01-01
To study the involvement of the anterior nuclei of the thalamus (ANT) as compared to the involvement of the hippocampus in the processes of encoding and recognition during visual and verbal memory tasks. We studied intracerebral recordings in patients with pharmacoresistant epilepsy who underwent deep brain stimulation (DBS) of the ANT with depth electrodes implanted bilaterally in the ANT and compared the results with epilepsy surgery candidates with depth electrodes implanted bilaterally in the hippocampus. We recorded the event-related potentials (ERPs) elicited by the visual and verbal memory encoding and recognition tasks. P300-like potentials were recorded in the hippocampus during the visual and verbal memory encoding and recognition tasks, and in the ANT during the visual encoding and the visual and verbal recognition tasks. No significant ERPs were recorded during the verbal encoding task in the ANT. In the visual and verbal recognition tasks, the P300-like potentials in the ANT preceded the P300-like potentials in the hippocampus. The ANT is a structure in the memory pathway that processes memory information before the hippocampus. We suggest that the ANT has a specific role in memory processes, especially memory recognition, and that memory disturbance should be considered in patients with ANT-DBS and in patients with ANT lesions. The ANT is well positioned to serve as a subcortical gate for memory processing in cortical structures.
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks for hierarchical visual recognition more effectively. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our HD-MTL algorithm integrates two discriminative regularization terms to control the inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results demonstrate that the HD-MTL algorithm achieves very competitive accuracy rates for large-scale visual recognition.
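The coarse-to-fine decision structure of a visual tree can be illustrated with a minimal nearest-centroid sketch: a root step first assigns an input to a group, and a group-specific step then picks the atomic class within that group. This is only a toy illustration of the tree-classifier idea, not the HD-MTL algorithm itself: the 2-D points stand in for deep CNN features, and all group/class names and centroid values are hypothetical.

```python
def nearest(x, centroids):
    """Return the key whose centroid is closest to x (squared Euclidean)."""
    return min(centroids,
               key=lambda k: sum((a - b) ** 2 for a, b in zip(x, centroids[k])))

# Toy visual tree: two groups, each containing two atomic classes.
group_centroids = {"vehicles": (0.0, 0.0), "animals": (10.0, 10.0)}
class_centroids = {
    "vehicles": {"car": (0.0, 1.0), "truck": (1.0, 0.0)},
    "animals":  {"cat": (9.0, 10.0), "dog": (10.0, 9.0)},
}

def classify(x):
    """Coarse-to-fine: pick a group first, then a class within that group."""
    group = nearest(x, group_centroids)
    label = nearest(x, class_centroids[group])
    return group, label

print(classify((0.2, 0.9)))   # → ('vehicles', 'car')
print(classify((9.8, 9.1)))   # → ('animals', 'dog')
```

The benefit at scale is that each node classifier only has to separate a small set of visually similar classes, rather than all classes at once; the paper's regularization terms additionally limit how errors at the group level propagate downward.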
Mind wandering in text comprehension under dual-task conditions
Dixon, Peter; Li, Henry
2013-01-01
In two experiments, subjects responded to on-task probes while reading under dual-task conditions. The secondary task was to monitor the text for occurrences of the letter e. In Experiment 1, reading comprehension was assessed with a multiple-choice recognition test; in Experiment 2, subjects recalled the text. In both experiments, the secondary task replicated the well-known “missing-letter effect” in which detection of e's was less effective for function words and the word “the.” Letter detection was also more effective when subjects were on task, but this effect did not interact with the missing-letter effect. Comprehension was assessed in both the dual-task conditions and in control single-task conditions. In the single-task conditions, both recognition (Experiment 1) and recall (Experiment 2) were better when subjects were on task, replicating previous research on mind wandering. Surprisingly, though, comprehension under dual-task conditions only showed an effect of being on task when measured with recall; there was no effect on recognition performance. Our interpretation of this pattern of results is that subjects generate responses to on-task probes on the basis of a retrospective assessment of the contents of working memory. Further, we argue that under dual-task conditions, the contents of working memory is not closely related to the reading processes required for accurate recognition performance. These conclusions have implications for models of text comprehension and for the interpretation of on-task probe responses. PMID:24101909
de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target visual processes specific to action and direction discrimination. In separate conditions, participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633
Gaudelus, B; Virgile, J; Peyroux, E; Leleu, A; Baudouin, J-Y; Franck, N
2015-06-01
The impairment of social cognition, including facial affect recognition, is a well-established trait in schizophrenia, and specific cognitive remediation programs focusing on facial affect recognition have been developed by different teams worldwide. However, even though social cognitive impairments have been confirmed, previous studies have also shown heterogeneity of the results between different subjects. Therefore, individual abilities should be assessed before proposing such programs. Most research teams apply facial affect recognition tasks based on the stimuli of Ekman et al. or Gur et al. However, these tasks are not easily applicable in routine clinical practice. Here, we present the Facial Emotions Recognition Test (TREF), which is designed to identify facial affect recognition impairments in clinical practice. The test is composed of 54 photos and evaluates abilities in the recognition of six universal emotions (joy, anger, sadness, fear, disgust and contempt). Each of these emotions is represented with colored photos of 4 different models (two men and two women) at nine intensity levels from 20 to 100%. Each photo is presented for 10 seconds; no time limit for responding is applied. The present study compared the scores of the TREF test in a sample of healthy controls (64 subjects) and people with stabilized schizophrenia (45 subjects) according to the DSM IV-TR criteria. We analysed global scores for all emotions, as well as sub-scores for each emotion, between these two groups, taking into account gender differences. Our results were consistent with previous findings. Applying the TREF, we confirmed an impairment in facial affect recognition in schizophrenia by showing significant differences between the two groups in their global results (76.45% for healthy controls versus 61.28% for people with schizophrenia), as well as in sub-scores for each emotion except for joy.
Scores for women were significantly higher than for men in the population without psychiatric diagnosis. The study also allowed the identification of cut-off scores: results more than 2 standard deviations below the healthy control average (i.e., below 61.57%) indicate a facial affect recognition deficit. The TREF appears to be a useful tool to identify facial affect recognition impairment in schizophrenia. Neuropsychologists who have tried the task have given positive feedback. The TREF is easy to use (duration of about 15 minutes), easy to apply in subjects with attentional difficulties, and tests facial affect recognition at ecologically realistic intensity levels. These results have to be confirmed in the future with larger sample sizes and in comparison with other tasks evaluating facial affect recognition processes. Copyright © 2014 L’Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
Method of determining the necessary number of observations for video stream documents recognition
NASA Astrophysics Data System (ADS)
Arlazarov, Vladimir V.; Bulatov, Konstantin; Manzhikov, Temudzhin; Slavin, Oleg; Janiszewski, Igor
2018-04-01
This paper discusses the task of document recognition on a sequence of video frames. In order to optimize processing speed, the stability of recognition results obtained from several video frames is estimated. Considering identity document (Russian internal passport) recognition on a mobile device, it is shown that the number of observations necessary for obtaining a precise recognition result can be significantly decreased.
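A stability-based stopping rule in this spirit can be sketched as follows: combine the per-frame recognition results by majority vote, and stop capturing frames once the combined result has been unchanged for k consecutive frames. This is a simplified illustration of the idea, not the paper's estimation method, and the sample field values are hypothetical.

```python
from collections import Counter

def frames_needed(frame_results, k=3):
    """Process per-frame recognition results in order; after each frame,
    take the majority vote over all frames seen so far. Return the number
    of frames consumed once that combined result has stayed identical for
    k consecutive frames, or None if it never stabilizes."""
    combined = []
    prev, streak = None, 0
    for i, result in enumerate(frame_results, 1):
        combined.append(result)
        current = Counter(combined).most_common(1)[0][0]  # majority vote
        if current == prev:
            streak += 1
        else:
            prev, streak = current, 1
        if streak >= k:
            return i
    return None

# Hypothetical OCR outputs for one document field across video frames:
frames = ["4510 123", "4510 128", "4510 128", "4510 128", "4510 128"]
print(frames_needed(frames, k=3))  # → 5
```

In a mobile capture loop, such a rule lets the device stop the camera stream early when the recognized field values are already stable, instead of processing a fixed number of frames.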
Zeintl, Melanie; Kliegel, Matthias
2010-01-01
Generally, older adults perform worse than younger adults in complex working memory span tasks. So far, it is unclear which processes mainly contribute to age-related differences in working memory span. The aim of the present study was to investigate age effects and the roles of proactive and coactive interference in a recognition-based version of the operation span task. Younger and older adults performed standard versions and distracter versions of the operation span task. At retrieval, participants had to recognize target words in word lists containing targets as well as proactive and/or coactive interference-related lures. Results show that, overall, younger adults outperformed older adults in the recognition of target words. Furthermore, analyses of error types indicate that, while younger adults were only affected by simultaneously presented distracter words, older adults had difficulties with both proactive and coactive interference. Results suggest that age effects in complex span tasks may not be mainly due to retrieval deficits in old age. Copyright 2009 S. Karger AG, Basel.
ERIC Educational Resources Information Center
Malmberg, Kenneth J.; Annis, Jeffrey
2012-01-01
Many models of recognition are derived from models originally applied to perception tasks, which assume that decisions from trial to trial are independent. While the independence assumption is violated for many perception tasks, we present the results of several experiments intended to relate memory and perception by exploring sequential…
Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David
2017-11-01
Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and modalities of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated males of equivalent high fitness, between the ages of 18 and 34, rely more on visual cues vs auditory or haptic for pattern recognition when exertion level is high. We found that navigation response time was shorter under low and medium exertion levels as compared to high intensity. Navigation accuracy was lower under high level exertion compared to medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased accuracy in pattern recognition responses. Practitioner Summary: An examination was conducted on the effect of physical exertion and information presentation modality in pattern recognition and navigation. In occupations requiring information presentation to workers, who are simultaneously performing a physical task, the visual modality appears most effective under high level exertion while haptic cueing degrades performance.
Bernard, Jean-Baptiste; Aguilar, Carlos; Castet, Eric
2016-01-01
Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally-sighted subjects used exclusively their peripheral vision to run two aloud reading tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements) and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity). PMID:27074013
Familiarity and Recollection in Heuristic Decision Making
Schwikert, Shane R.; Curran, Tim
2014-01-01
Heuristics involve the ability to utilize memory to make quick judgments by exploiting fundamental cognitive abilities. In the current study we investigated the memory processes that contribute to the recognition heuristic and the fluency heuristic, which are both presumed to capitalize on the by-products of memory to make quick decisions. In Experiment 1, we used a city-size comparison task while recording event-related potentials (ERPs) to investigate the potential contributions of familiarity and recollection to the two heuristics. ERPs were markedly different for recognition heuristic-based decisions and fluency heuristic-based decisions, suggesting a role for familiarity in the recognition heuristic and recollection in the fluency heuristic. In Experiment 2, we coupled the same city-size comparison task with measures of subjective pre-experimental memory for each stimulus in the task. Although previous literature suggests the fluency heuristic relies on recognition speed alone, our results suggest differential contributions of recognition speed and recollected knowledge to these decisions, whereas the recognition heuristic relies on familiarity. Based on these results, we created a new theoretical framework that explains decisions attributed to both heuristics based on the underlying memory associated with the choice options. PMID:25347534
The effects of sleep deprivation on item and associative recognition memory.
Ratcliff, Roger; Van Dongen, Hans P A
2018-02-01
Sleep deprivation adversely affects the ability to perform cognitive tasks, but theories range from predicting an overall decline in cognitive functioning because of reduced stability in attentional networks to specific deficits in various cognitive domains or processes. We measured the effects of sleep deprivation on two memory tasks, item recognition ("Was this word in the list studied?") and associative recognition ("Were these two words studied in the same pair?"). These tasks test memory for information encoded a few minutes earlier and so do not address effects of sleep deprivation on working memory or consolidation after sleep. A diffusion model was used to decompose accuracy and response time distributions to produce parameter estimates of components of cognitive processing. The model assumes that over time, noisy evidence from the task stimulus is accumulated to one of two decision criteria, and parameters governing this process are extracted and interpreted in terms of distinct cognitive processes. Results showed that sleep deprivation reduces drift rate (evidence used in the decision process), with little effect on the other components of the decision process. These results contrast with the effects of aging, which show little decline in item recognition but large declines in associative recognition. The results suggest that sleep deprivation degrades the quality of information stored in memory and that this may occur through degraded attentional processes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
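The two-boundary diffusion process summarized in this abstract can be illustrated with a minimal simulation. This is a sketch for intuition only, not the authors' fitting procedure; the drift, boundary, and noise values are arbitrary choices. It shows the qualitative prediction the abstract reports: lowering drift rate slows responses and lowers accuracy while the rest of the decision machinery is unchanged.

```python
import random

def simulate_diffusion(drift, boundary=1.0, start=0.5, noise=1.0,
                       dt=0.001, max_time=5.0, rng=None):
    """Simulate one trial of a two-boundary diffusion decision.

    Evidence starts at start * boundary and drifts with mean rate
    `drift` plus Gaussian noise until it hits 0 (incorrect) or
    `boundary` (correct). Returns (choice, reaction_time); a rare
    timeout with evidence still between the bounds counts as 0.
    """
    rng = rng or random.Random()
    x = start * boundary
    t = 0.0
    sd = noise * dt ** 0.5  # per-step noise scales with sqrt(dt)
    while 0.0 < x < boundary and t < max_time:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return (1 if x >= boundary else 0), t

# Compare a high drift rate (rested) with a low one (sleep deprived).
rng = random.Random(42)
high = [simulate_diffusion(2.0, rng=rng) for _ in range(500)]
low = [simulate_diffusion(0.5, rng=rng) for _ in range(500)]
acc_high = sum(c for c, _ in high) / len(high)
acc_low = sum(c for c, _ in low) / len(low)
rt_high = sum(t for _, t in high) / len(high)
rt_low = sum(t for _, t in low) / len(low)
```

With these settings the low-drift condition produces slower and less accurate decisions, mirroring the reported drift-rate deficit.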
Emotion Recognition in Frontotemporal Dementia and Alzheimer's Disease: A New Film-Based Assessment
Goodkind, Madeleine S.; Sturm, Virginia E.; Ascher, Elizabeth A.; Shdo, Suzanne M.; Miller, Bruce L.; Rankin, Katherine P.; Levenson, Robert W.
2015-01-01
Deficits in recognizing others' emotions are reported in many psychiatric and neurological disorders, including autism, schizophrenia, behavioral variant frontotemporal dementia (bvFTD) and Alzheimer's disease (AD). Most previous emotion recognition studies have required participants to identify emotional expressions in photographs. This type of assessment differs from real-world emotion recognition in important ways: Images are static rather than dynamic, include only 1 modality of emotional information (i.e., visual information), and are presented absent a social context. Additionally, existing emotion recognition batteries typically include multiple negative emotions, but only 1 positive emotion (i.e., happiness) and no self-conscious emotions (e.g., embarrassment). We present initial results using a new task for assessing emotion recognition that was developed to address these limitations. In this task, respondents view a series of short film clips and are asked to identify the main characters' emotions. The task assesses multiple negative, positive, and self-conscious emotions based on information that is multimodal, dynamic, and socially embedded. We evaluate this approach in a sample of patients with bvFTD, AD, and normal controls. Results indicate that patients with bvFTD have emotion recognition deficits in all 3 categories of emotion compared to the other groups. These deficits were especially pronounced for negative and self-conscious emotions. Emotion recognition in this sample of patients with AD was indistinguishable from controls. These findings underscore the utility of this approach to assessing emotion recognition and suggest that previous findings that recognition of positive emotion was preserved in dementia patients may have resulted from the limited sampling of positive emotion in traditional tests. PMID:26010574
Some Memories Are Odder than Others: Judgments of Episodic Oddity Violate Known Decision Rules
ERIC Educational Resources Information Center
O'Connor, Akira R.; Guhl, Emily N.; Cox, Justin C.; Dobbins, Ian G.
2011-01-01
Current decision models of recognition memory are based almost entirely on one paradigm: single-item old/new judgments accompanied by confidence ratings. This task results in receiver operating characteristics (ROCs) that are well fit by both signal-detection and dual-process models. Here we examine an entirely new recognition task, the judgment…
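The confidence-rating ROCs this abstract refers to are built by cumulating response counts from most to least confident "old"; sensitivity under the equal-variance signal-detection model is then the familiar d' statistic. The following is an illustrative sketch with made-up counts, not the authors' analysis code:

```python
from statistics import NormalDist

def roc_points(old_counts, new_counts):
    """ROC points from confidence-rating counts ordered from the most
    to the least confident 'old' response. Each point is a
    (false-alarm rate, hit rate) pair at one confidence criterion."""
    def cum_rates(counts):
        total, running, rates = sum(counts), 0, []
        for c in counts:
            running += c
            rates.append(running / total)
        return rates
    return list(zip(cum_rates(new_counts), cum_rates(old_counts)))

def d_prime(hit_rate, fa_rate):
    """Equal-variance signal-detection sensitivity index:
    z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Toy counts: 3 confidence levels for studied (old) and unstudied
# (new) test items, most confident 'old' first.
points = roc_points([10, 20, 30], [5, 15, 40])
```

The final ROC point is always (1.0, 1.0), since cumulating over all confidence levels accepts every item.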
Biometric recognition via texture features of eye movement trajectories in a visual searching task.
Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang
2018-01-01
Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction methods, and feature recognition methods have been proposed to improve the performance of eye-movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution, are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the improvement from using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.
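The two metrics named in this abstract have standard definitions: the EER is the operating point where the false accept rate equals the false reject rate, and the Rank-1 IR is the fraction of identification trials where the true identity receives the top match score. A minimal sketch of both, with hypothetical function names and toy scores (not the authors' implementation, which operates on eye-movement feature distances):

```python
def equal_error_rate(genuine, impostor):
    """Estimate the EER by sweeping a threshold over all observed
    scores. FAR = fraction of impostor scores at/above threshold;
    FRR = fraction of genuine scores below it. Higher score means a
    better match. Returns the midpoint of FAR and FRR at the
    threshold where they are closest."""
    best_gap, eer = float("inf"), None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

def rank1_identification_rate(trials):
    """trials: list of (scores, true_identity) pairs, where scores
    maps candidate identities to match scores. Rank-1 IR is the
    fraction of trials whose top-scoring candidate is correct."""
    hits = sum(max(scores, key=scores.get) == true_id
               for scores, true_id in trials)
    return hits / len(trials)
```

For perfectly separated score distributions the EER is 0; overlapping distributions push it toward 0.5.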
Assessment of Self-Recognition in Young Children with Handicaps.
ERIC Educational Resources Information Center
Kelley, Michael F.; And Others
1988-01-01
Thirty young children with handicaps were assessed on five self-recognition mirror tasks. The set of tasks formed a reproducible scale, indicating that these tasks are an appropriate measure of self-recognition in this population. Data analysis suggested that stage of self-recognition is positively and significantly related to cognitive…
2016-01-01
Objective: Memory deficits in patients with frontal lobe lesions are most apparent on free recall tasks that require the selection, initiation, and implementation of retrieval strategies. The effect of frontal lesions on recognition memory performance is less clear, with some studies reporting recognition memory impairments and others not. The majority of these studies do not directly compare recall and recognition within the same group of frontal patients, assessing only recall or recognition memory performance. Other studies that do compare recall and recognition in the same frontal group do not use recall and recognition tests that are comparable for difficulty; recognition memory impairments may go unreported because recognition memory tasks are less demanding. Method: This study aimed to investigate recall and recognition impairments in the same group of 47 frontal patients and 78 healthy controls. The Doors and People Test was administered as a neuropsychological test of memory as it assesses both verbal and visual recall and recognition using subtests that are matched for difficulty. Results: Significant verbal and visual recall and recognition impairments were found in the frontal patients. Conclusion: These results demonstrate that when frontal patients are assessed on recall and recognition memory tests of comparable difficulty, memory impairments are found on both types of episodic memory test. PMID:26752123
The own-age face recognition bias is task dependent.
Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J
2015-08-01
The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity. © 2014 The British Psychological Society.
Oh, Jooyoung; Chun, Ji-Won; Kim, Eunseong; Park, Hae-Jeong; Lee, Boreom; Kim, Jae-Jin
2017-01-01
Patients with schizophrenia exhibit several cognitive deficits, including memory impairment. Problems with recognition memory can hinder socially adaptive behavior. Previous investigations have suggested that altered activation of the frontotemporal area plays an important role in recognition memory impairment. However, the cerebral networks related to these deficits are not known. The aim of this study was to elucidate the brain networks required for recognizing socially relevant information in patients with schizophrenia performing an old-new recognition task. Sixteen patients with schizophrenia and 16 controls participated in this study. First, the subjects performed the theme-identification task during functional magnetic resonance imaging. In this task, pictures depicting social situations were presented with three words, and the subjects were asked to select the best theme word for each picture. The subjects then performed an old-new recognition task in which they were asked to discriminate whether the presented words were old or new. Task performance and neural responses in the old-new recognition task were compared between the subject groups. An independent component analysis of the functional connectivity was performed. The patients with schizophrenia exhibited decreased discriminability and increased activation of the right superior temporal gyrus compared with the controls during correct responses. Furthermore, aberrant network activities were found in the frontopolar and language comprehension networks in the patients. The functional connectivity analysis showed aberrant connectivity in the frontopolar and language comprehension networks in the patients with schizophrenia, and these aberrations possibly contribute to their low recognition performance and social dysfunction. These results suggest that the frontopolar and language comprehension networks are potential therapeutic targets in patients with schizophrenia.
Neumann, Dawn; McDonald, Brenna C; West, John; Keiski, Michelle A; Wang, Yang
2016-06-01
The neurobiological mechanisms that underlie facial affect recognition deficits after traumatic brain injury (TBI) have not yet been identified. Using functional magnetic resonance imaging (fMRI), study aims were to 1) determine if there are differences in brain activation during facial affect processing in people with TBI who have facial affect recognition impairments (TBI-I) relative to people with TBI and healthy controls who do not have facial affect recognition impairments (TBI-N and HC, respectively); and 2) identify relationships between neural activity and facial affect recognition performance. A facial affect recognition screening task performed outside the scanner was used to determine group classification; TBI patients who performed greater than one standard deviation below normal performance scores were classified as TBI-I, while TBI patients with normal scores were classified as TBI-N. An fMRI facial recognition paradigm was then performed within the 3T environment. Results from 35 participants are reported (TBI-I = 11, TBI-N = 12, and HC = 12). For the fMRI task, TBI-I and TBI-N groups scored significantly lower than the HC group. Blood oxygenation level-dependent (BOLD) signals for facial affect recognition compared to a baseline condition of viewing a scrambled face, revealed lower neural activation in the right fusiform gyrus (FG) in the TBI-I group than the HC group. Right fusiform gyrus activity correlated with accuracy on the facial affect recognition tasks (both within and outside the scanner). Decreased FG activity suggests facial affect recognition deficits after TBI may be the result of impaired holistic face processing. Future directions and clinical implications are discussed.
Relational and item-specific influences on generate-recognize processes in recall.
Guynn, Melissa J; McDaniel, Mark A; Strosser, Garrett L; Ramirez, Juan M; Castleberry, Erica H; Arnett, Kristen H
2014-02-01
The generate-recognize model and the relational-item-specific distinction are two approaches to explaining recall. In this study, we consider the two approaches in concert. Following Jacoby and Hollingshead (Journal of Memory and Language 29:433-454, 1990), we implemented a production task and a recognition task following production (1) to evaluate whether generation and recognition components were evident in cued recall and (2) to gauge the effects of relational and item-specific processing on these components. An encoding task designed to augment item-specific processing (anagram-transposition) produced a benefit on the recognition component (Experiments 1-3) but no significant benefit on the generation component (Experiments 1-3), in the context of a significant benefit to cued recall. By contrast, an encoding task designed to augment relational processing (category-sorting) did produce a benefit on the generation component (Experiment 3). These results converge on the idea that in recall, item-specific processing impacts a recognition component, whereas relational processing impacts a generation component.
Processing of Acoustic Cues in Lexical-Tone Identification by Pediatric Cochlear-Implant Recipients
Peng, Shu-Chen; Lu, Hui-Ping; Lu, Nelson; Lin, Yung-Song; Deroche, Mickael L. D.
2017-01-01
Purpose: The objective was to investigate acoustic cue processing in lexical-tone recognition by pediatric cochlear-implant (CI) recipients who are native Mandarin speakers. Method: Lexical-tone recognition was assessed in pediatric CI recipients and listeners with normal hearing (NH) in 2 tasks. In Task 1, participants identified naturally uttered words that were contrastive in lexical tones. For Task 2, a disyllabic word (yanjing) was manipulated orthogonally, varying in fundamental-frequency (F0) contours and duration patterns. Participants identified each token with the second syllable jing pronounced with Tone 1 (a high level tone) as eyes or with Tone 4 (a high falling tone) as eyeglasses. Results: CI participants' recognition accuracy was significantly lower than NH listeners' in Task 1. In Task 2, CI participants' reliance on F0 contours was significantly less than that of NH listeners; their reliance on duration patterns, however, was significantly higher than that of NH listeners. Both CI and NH listeners' performance in Task 1 was significantly correlated with their reliance on F0 contours in Task 2. Conclusion: For pediatric CI recipients, lexical-tone recognition using naturally uttered words is primarily related to their reliance on F0 contours, although duration patterns may be used as an additional cue. PMID:28388709
Application of advanced speech technology in manned penetration bombers
NASA Astrophysics Data System (ADS)
North, R.; Lea, W.
1982-03-01
This report documents research on the potential use of speech technology in a manned penetration bomber aircraft (B-52/G and H). The objectives of the project were to analyze the pilot/copilot crewstation tasks over a three-hour-and-forty-minute mission and determine the tasks that would benefit most from conversion to speech recognition/generation, determine the technological feasibility of each of the identified tasks, and prioritize these tasks based on these criteria. Secondary objectives of the program were to enunciate research strategies in the application of speech technologies in airborne environments and to develop guidelines for briefing user commands on the potential of using speech technologies in the cockpit. The results of this study indicated that, for the B-52 crewmember, speech recognition would be most beneficial for retrieving chart and procedural data contained in the flight manuals. The technological feasibility assessment indicated that the checklist and procedural retrieval tasks would be highly feasible for a speech recognition system.
The Costs and Benefits of Testing and Guessing on Recognition Memory
Huff, Mark J.; Balota, David A.; Hutchison, Keith A.
2016-01-01
We examined whether two types of interpolated tasks (i.e., retrieval-practice via free recall or guessing a missing critical item) improved final recognition for related and unrelated word lists relative to restudying or completing a filler task. Both retrieval-practice and guessing tasks improved correct recognition relative to restudy and filler tasks, particularly when study lists were semantically related. However, both retrieval practice and guessing also generally inflated false recognition for the non-presented critical words. These patterns were found when final recognition was completed during a short delay within the same experimental session (Experiment 1) and following a 24-hr delay (Experiment 2). In Experiment 3, task instructions were presented randomly after each list to determine whether retrieval-practice and guessing effects were influenced by task-expectancy processes. In contrast to Experiments 1 and 2, final recognition following retrieval practice and guessing was equivalent to restudy, suggesting that the observed retrieval-practice and guessing advantages were in part due to preparatory task-based processing during study. PMID:26950490
The posterior parietal cortex in recognition memory: a neuropsychological study.
Haramati, Sharon; Soroker, Nachum; Dudai, Yadin; Levy, Daniel A
2008-01-01
Several recent functional neuroimaging studies have reported robust bilateral activation (L>R) in lateral posterior parietal cortex and precuneus during recognition memory retrieval tasks. It has not yet been determined what cognitive processes are represented by those activations. In order to examine whether parietal lobe-based processes are necessary for basic episodic recognition abilities, we tested a group of 17 first-incident CVA patients whose cortical damage included (but was not limited to) extensive unilateral posterior parietal lesions. These patients performed a series of tasks that yielded parietal activations in previous fMRI studies: yes/no recognition judgments on visual words and on colored object pictures and identifiable environmental sounds. We found that patients with left hemisphere lesions were not impaired compared to controls in any of the tasks. Patients with right hemisphere lesions were not significantly impaired in memory for visual words, but were impaired in recognition of object pictures and sounds. Two lesion-behavior analyses, area-based correlations and voxel-based lesion symptom mapping (VLSM), indicate that these impairments resulted from extra-parietal damage, specifically to frontal and lateral temporal areas. These findings suggest that extensive parietal damage does not impair recognition performance. We suggest that parietal activations recorded during recognition memory tasks might reflect peri-retrieval processes, such as the storage of retrieved memoranda in a working memory buffer for further cognitive processing.
Recognition memory span in autopsy-confirmed Dementia with Lewy Bodies and Alzheimer's Disease.
Salmon, David P; Heindel, William C; Hamilton, Joanne M; Vincent Filoteo, J; Cidambi, Varun; Hansen, Lawrence A; Masliah, Eliezer; Galasko, Douglas
2015-08-01
Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and Normal Control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from long-term storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. Copyright © 2015 Elsevier Ltd. All rights reserved.
Stages of processing in associative recognition: evidence from behavior, EEG, and classification.
Borst, Jelmer P; Schneider, Darryl W; Walsh, Matthew M; Anderson, John R
2013-12-01
In this study, we investigated the stages of information processing in associative recognition. We recorded EEG data while participants performed an associative recognition task that involved manipulations of word length, associative fan, and probe type, which were hypothesized to affect the perceptual encoding, retrieval, and decision stages of the recognition task, respectively. Analyses of the behavioral and EEG data, supplemented with classification of the EEG data using machine-learning techniques, provided evidence that generally supported the sequence of stages assumed by a computational model developed in the Adaptive Control of Thought-Rational cognitive architecture. However, the results suggested a more complex relationship between memory retrieval and decision-making than assumed by the model. Implications of the results for modeling associative recognition are discussed. The study illustrates how a classifier approach, in combination with focused manipulations, can be used to investigate the timing of processing stages.
Perceptual fluency and affect without recognition.
Anand, P; Sternthal, B
1991-05-01
A dichotic listening task was used to investigate the affect-without-recognition phenomenon. Subjects performed a distractor task by responding to the information presented in one ear while ignoring the target information presented in the other ear. The subjects' recognition of and affect toward the target information as well as toward foils was measured. The results offer evidence for the affect-without-recognition phenomenon. Furthermore, the data suggest that the subjects' affect toward the stimuli depended primarily on the extent to which the stimuli were perceived as familiar (i.e., subjective familiarity), and this perception was influenced by the ear in which the distractor or the target information was presented. These data are interpreted in terms of current models of recognition memory and hemispheric lateralization.
Tejeria, L; Harper, R A; Artes, P H; Dickinson, C M
2002-01-01
Aims: (1) To explore the relation between performance on tasks of familiar face recognition (FFR) and face expression difference discrimination (FED) with both perceived disability in face recognition and clinical measures of visual function in subjects with age related macular degeneration (AMD). (2) To quantify the gain in performance for face recognition tasks when subjects use a bioptic telescopic low vision device. Methods: 30 subjects with AMD (age range 66–90 years; visual acuity 0.4–1.4 logMAR) were recruited for the study. Perceived (self rated) disability in face recognition was assessed by an eight item questionnaire covering a range of issues relating to face recognition. Visual functions measured were distance visual acuity (ETDRS logMAR charts), continuous text reading acuity (MNRead charts), contrast sensitivity (Pelli-Robson chart), and colour vision (large panel D-15). In the FFR task, images of famous people had to be identified. FED was assessed by a forced choice test where subjects had to decide which one of four images showed a different facial expression. These tasks were repeated with subjects using a bioptic device. Results: Overall perceived disability in face recognition did not correlate with performance on either task, although a specific item on difficulty recognising familiar faces did correlate with FFR (r = 0.49, p<0.05). FFR performance was most closely related to distance acuity (r = −0.69, p<0.001), while FED performance was most closely related to continuous text reading acuity (r = −0.79, p<0.001). In multiple regression, neither contrast sensitivity nor colour vision significantly increased the explained variance. When using a bioptic telescope, FFR performance improved in 86% of subjects (median gain = 49%; p<0.001), while FED performance increased in 79% of subjects (median gain = 50%; p<0.01). Conclusion: Distance and reading visual acuity are closely associated with measured task performance in FFR and FED. A bioptic low vision device can offer a significant improvement in performance for face recognition tasks, and may be useful in reducing the handicap associated with this disability. There is, however, little evidence for a correlation between self rated difficulty in face recognition and measured performance for either task. Further work is needed to explore the complex relation between the perception of disability and measured performance. PMID:12185131
The CC chemokine receptor 5 regulates olfactory and social recognition in mice.
Kalkonde, Y V; Shelton, R; Villarreal, M; Sigala, J; Mishra, P K; Ahuja, S S; Barea-Rodriguez, E; Moretti, P; Ahuja, S K
2011-12-01
Chemokines are chemotactic cytokines that regulate cell migration and are thought to play an important role in a broad range of inflammatory diseases. The availability of chemokine receptor blockers makes them an important therapeutic target. In vitro, chemokines have been shown to modulate neurotransmission. However, it is unclear whether chemokines play a role in behavior and cognition. Here we evaluated the role of CC chemokine receptor 5 (CCR5) in various behavioral tasks using wild-type (Ccr5⁺/⁺) and Ccr5-null (Ccr5⁻/⁻) mice. Ccr5⁻/⁻ mice showed enhanced social recognition. Administration of CC chemokine ligand 3 (CCL3), one of the CCR5 ligands, impaired social recognition. Since the social recognition task depends on the sense of olfaction, we tested olfactory recognition for social and non-social scents in these mice. Ccr5⁻/⁻ mice had enhanced olfactory recognition for both types of scent, indicating that their enhanced performance in the social recognition task could be due to enhanced olfactory recognition. Spatial memory and aversive memory were comparable in wild-type and Ccr5⁻/⁻ mice. Collectively, these results suggest that chemokines and their receptors might play an important role in olfactory recognition tasks in mice and, to our knowledge, represent the first direct demonstration of an in vivo role for CCR5 in modulating social behavior in mice. These studies are important because CCR5 blockers are undergoing clinical trials and can potentially modulate behavior. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao
2015-04-15
This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task, to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants had to indicate, for each recognition word, whether it had not been presented or, if it had, with which type of information it had been paired during the encoding phase. Word recognition in the auditory condition was higher than in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipole analysis of the MEG data indicated higher equivalent current dipole amplitudes in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. The results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.
Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro
2011-12-01
The present study investigated the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. In the learning phase, forty-eight students were shown 125 words for 4 s each, randomly presented half in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups: one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched previously studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites with respect to old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and the hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and the network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation). Copyright © 2011 Elsevier B.V. 
All rights reserved.
The word-frequency paradox for recall/recognition occurs for pictures.
Karlsen, Paul Johan; Snodgrass, Joan Gay
2004-08-01
A yes-no recognition task and two recall tasks were conducted using pictures of high and low familiarity ratings. Picture familiarity had analogous effects to word frequency, and replicated the word-frequency paradox in recall and recognition. Low-familiarity pictures were more recognizable than high-familiarity pictures, pure lists of high-familiarity pictures were more recallable than pure lists of low-familiarity pictures, and there was no effect of familiarity for mixed lists. These results are consistent with the predictions of the Search of Associative Memory (SAM) model.
SAR target recognition and posture estimation using spatial pyramid pooling within CNN
NASA Astrophysics Data System (ADS)
Peng, Lijiang; Liu, Xiaohua; Liu, Ming; Dong, Liquan; Hui, Mei; Zhao, Yuejin
2018-01-01
Many convolutional neural network (CNN) architectures have been proposed to strengthen performance on synthetic aperture radar automatic target recognition (SAR-ATR) and have obtained state-of-the-art results on target classification with the MSTAR database, but few methods address the estimation of target depression and azimuth angles. To learn better hierarchical feature representations for both the 10-class target classification task and target posture estimation tasks, we propose a new CNN architecture with spatial pyramid pooling (SPP), which builds a hierarchy of feature maps by dividing the convolved feature maps from finer to coarser levels to aggregate local features of SAR images. Experimental results on the MSTAR database show that the proposed architecture achieves recognition accuracy as high as 99.57% on the 10-class target classification task, matching the most current state-of-the-art methods, and also performs well on target posture estimation tasks involving depression-angle and azimuth-angle variation. Moreover, the results point to further applications of deep learning to SAR target posture description.
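The spatial pyramid pooling idea described above can be illustrated with a small NumPy sketch. This is a simplified, single-channel version with max pooling over a 1×1/2×2/4×4 pyramid, an assumption for illustration rather than the authors' actual implementation:

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a 2-D feature map over a pyramid of grids and
    concatenate the results into a fixed-length vector."""
    h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Split rows and columns into n nearly equal bins.
        row_edges = np.linspace(0, h, n + 1).astype(int)
        col_edges = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[row_edges[i]:row_edges[i + 1],
                                   col_edges[j]:col_edges[j + 1]]
                pooled.append(cell.max())
    # Output length is sum(n * n for n in levels), independent of h and w,
    # which is what lets SPP feed variable-sized maps into fixed-size layers.
    return np.array(pooled)
```

Because the output length depends only on the pyramid levels, feature maps of any spatial size map to the same fixed-length descriptor.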
Parallel processing considerations for image recognition tasks
NASA Astrophysics Data System (ADS)
Simske, Steven J.
2011-01-01
Many image recognition tasks are well suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition (OCR), document classification, and barcode reading, to parallel pipelines. This can substantially decrease time to completion for the document tasks. In this approach, each parallel pipeline generally performs a different task. Parallel processing by image region allows a larger imaging task to be subdivided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
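The by-region pattern can be sketched as a map-reduce over horizontal bands of an image. This is a minimal illustration with a stand-in per-band analysis (a bright-pixel count); the helper names are hypothetical, not from the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_bands(image, n):
    """Split an image (a list of pixel rows) into n horizontal bands."""
    h = len(image)
    edges = [h * k // n for k in range(n + 1)]
    return [image[edges[k]:edges[k + 1]] for k in range(n)]

def count_bright(band, threshold=128):
    """'Map' step: an independent analysis task run on one band."""
    return sum(1 for row in band for px in row if px > threshold)

def parallel_bright_count(image, n_workers=4):
    """Fan the bands out to workers, then combine the partial results."""
    bands = split_into_bands(image, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(count_bright, bands))
    return sum(partials)  # 'reduce' step
```

Replacing `count_bright` with a real per-region detector (skew estimation, face detection) gives the structure the paper describes; the split/map/reduce skeleton is unchanged.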
Music Recognition in Frontotemporal Lobar Degeneration and Alzheimer Disease
Johnson, Julene K; Chang, Chiung-Chih; Brambati, Simona M; Migliaccio, Raffaella; Gorno-Tempini, Maria Luisa; Miller, Bruce L; Janata, Petr
2013-01-01
Objective To compare music recognition in patients with frontotemporal dementia, semantic dementia, Alzheimer disease, and controls and to evaluate the relationship between music recognition and brain volume. Background Recognition of familiar music depends on several levels of processing. There are few studies about how patients with dementia recognize familiar music. Methods Subjects were administered tasks that assess pitch and melody discrimination, detection of pitch errors in familiar melodies, and naming of familiar melodies. Results There were no group differences on pitch and melody discrimination tasks. However, patients with semantic dementia had considerable difficulty naming familiar melodies and also scored the lowest when asked to identify pitch errors in the same melodies. Naming familiar melodies, but not other music tasks, was strongly related to measures of semantic memory. Voxel-based morphometry analysis of brain MRI showed that difficulty in naming songs was associated with the bilateral temporal lobes and inferior frontal gyrus, whereas difficulty in identifying pitch errors in familiar melodies correlated with primarily the right temporal lobe. Conclusions The results support a view that the anterior temporal lobes play a role in familiar melody recognition, and that musical functions are affected differentially across forms of dementia. PMID:21617528
Poor phonemic discrimination does not underlie poor verbal short-term memory in Down syndrome.
Purser, Harry R M; Jarrold, Christopher
2013-05-01
Individuals with Down syndrome tend to have a marked impairment of verbal short-term memory. The chief aim of this study was to investigate whether phonemic discrimination contributes to this deficit. The secondary aim was to investigate whether phonological representations are degraded in verbal short-term memory in people with Down syndrome relative to control participants. To answer these questions, two tasks were used: a discrimination task, in which memory load was as low as possible, and a short-term recognition task that used the same stimulus items. Individuals with Down syndrome were found to perform significantly better than a nonverbal-matched typically developing group on the discrimination task, but they performed significantly more poorly than that group on the recognition task. The Down syndrome group was outperformed by an additional vocabulary-matched control group on the discrimination task but was outperformed to a markedly greater extent on the recognition task. Taken together, the results strongly indicate that phonemic discrimination ability is not central to the verbal short-term memory deficit associated with Down syndrome. Copyright © 2013 Elsevier Inc. All rights reserved.
Interference with olfactory memory by visual and verbal tasks.
Annett, J M; Cook, N M; Leslie, J C
1995-06-01
It has been claimed that olfactory memory is distinct from memory in other modalities. This study investigated the effectiveness of visual and verbal tasks in interfering with olfactory memory and included methodological changes from other recent studies. Subjects were allocated to one of four experimental conditions involving interference tasks [no interference task; visual task; verbal task; visual-plus-verbal task] and presented 15 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Recognition and recall performance both showed effects of interference of visual and verbal tasks but there was no effect for time of testing. While the results may be accommodated within a dual coding framework, further work is indicated to resolve theoretical issues relating to task complexity.
Comparing source-based and gist-based false recognition in aging and Alzheimer's disease.
Pierce, Benton H; Sullivan, Alison L; Schacter, Daniel L; Budson, Andrew E
2005-07-01
This study examined 2 factors contributing to false recognition of semantic associates: errors based on confusion of source and errors based on general similarity information or gist. The authors investigated these errors in patients with Alzheimer's disease (AD), age-matched control participants, and younger adults, focusing on each group's ability to use recollection of source information to suppress false recognition. The authors used a paradigm consisting of both deep and shallow incidental encoding tasks, followed by study of a series of categorized lists in which several typical exemplars were omitted. Results showed that healthy older adults were able to use recollection from the deep processing task to some extent but less than that used by younger adults. In contrast, false recognition in AD patients actually increased following the deep processing task, suggesting that they were unable to use recollection to oppose familiarity arising from incidental presentation. (c) 2005 APA, all rights reserved.
The adaptive use of recognition in group decision making.
Kämmer, Juliane E; Gaissmaier, Wolfgang; Reimer, Torsten; Schermuly, Carsten C
2014-06-01
Applying the framework of ecological rationality, the authors studied the adaptivity of group decision making. Specifically, they investigated whether groups apply decision strategies conditional on their composition in terms of task-relevant features. The authors focused on the recognition heuristic, so the task-relevant features were the validities of the group members' recognition and knowledge, which influenced the potential performance of group strategies. Forty-three three-member groups performed an inference task in which they had to infer which of two German companies had the higher market capitalization. Results based on the choice data support the hypothesis that groups adaptively apply the strategy that leads to the highest theoretically achievable performance. Time constraints had no effect on strategy use but did affect the proportions of different types of arguments. Possible mechanisms underlying the adaptive use of recognition in group decision making are discussed. © 2014 Cognitive Science Society, Inc.
Auditory processing deficits in bipolar disorder with and without a history of psychotic features.
Zenisek, RyAnna; Thaler, Nicholas S; Sutton, Griffin P; Ringdahl, Erik N; Snyder, Joel S; Allen, Daniel N
2015-11-01
Auditory perception deficits have been identified in schizophrenia (SZ) and linked to dysfunction in the auditory cortex. Given that psychotic symptoms, including auditory hallucinations, are also seen in bipolar disorder (BD), it may be that individuals with BD who also exhibit psychotic symptoms demonstrate a similar impairment in auditory perception. Fifty individuals with SZ, 30 individuals with bipolar I disorder with a history of psychosis (BD+), 28 individuals with bipolar I disorder with no history of psychotic features (BD-), and 29 normal controls (NC) were administered a tone discrimination task and an emotion recognition task. Mixed-model analyses of covariance with planned comparisons indicated that individuals with BD+ performed at a level that was intermediate between those with BD- and those with SZ on the more difficult condition of the tone discrimination task and on the auditory condition of the emotion recognition task. There were no differences between the BD+ and BD- groups on the visual or auditory-visual affect recognition conditions. Regression analyses indicated that performance on the tone discrimination task predicted performance on all conditions of the emotion recognition task. Auditory hallucinations in BD+ were not related to performance on either task. Our findings suggested that, although deficits in frequency discrimination and emotion recognition are more severe in SZ, these impairments extend to BD+. Although our results did not support the idea that auditory hallucinations may be related to these deficits, they indicated that basic auditory deficits may be a marker for psychosis, regardless of SZ or BD diagnosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Makeyev, Oleksandr; Sazonov, Edward; Schuckers, Stephanie; Lopez-Meyer, Paulo; Melanson, Ed; Neuman, Michael
2007-01-01
In this paper we propose a sound recognition technique based on the limited receptive area (LIRA) neural classifier and the continuous wavelet transform (CWT). The LIRA neural classifier was developed as a multipurpose image recognition system. Previous tests of LIRA demonstrated good results in different image recognition tasks, including handwritten digit recognition, face recognition, metal surface texture recognition, and micro workpiece shape recognition. We propose a sound recognition technique in which scalograms of sound instances serve as inputs to the LIRA neural classifier. The methodology was tested on recognition of swallowing sounds. Swallowing sound recognition may be employed in systems for automated swallowing assessment and diagnosis of swallowing disorders. The experimental results suggest high efficiency and reliability of the proposed approach.
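A scalogram of the kind used as classifier input can be sketched with NumPy. This uses a simplified, real-valued Morlet-style wavelet; the paper's exact wavelet and parameters are not specified here, so treat this as illustrative:

```python
import numpy as np

def morlet(t, w0=5.0):
    """Real-valued Morlet-style mother wavelet: a windowed cosine."""
    return np.cos(w0 * t) * np.exp(-t ** 2 / 2.0)

def scalogram(signal, scales):
    """Return |CWT| magnitudes: one row per scale, one column per sample."""
    n = len(signal)
    t = np.arange(-(n // 2), n - n // 2)  # centred support for the wavelet
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        wavelet = morlet(t / s) / np.sqrt(s)  # dilate and normalise
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out
```

The resulting scale-by-time magnitude array is a 2-D image, which is what allows an image classifier such as LIRA to be reused for sound.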
The locus of word frequency effects in skilled spelling-to-dictation.
Chua, Shi Min; Liow, Susan J Rickard
2014-01-01
In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than of low-frequency words. Tainturier and Rapp's model of spelling identifies three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval, without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task, which reflects orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task, which reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.
Eye movements during object recognition in visual agnosia.
Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe
2012-07-01
This paper reports the first detailed study of eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within the object bounding contour, regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to those of controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in the normal functional processes involved in the integration of shape information across object structure during the visual perception of shape. Copyright © 2012 Elsevier Ltd. All rights reserved.
Novelty preference in patients with developmental amnesia.
Munoz, M; Chadwick, M; Perez-Hernandez, E; Vargha-Khadem, F; Mishkin, M
2011-12-01
To re-examine whether or not selective hippocampal damage reduces novelty preference in visual paired comparison (VPC), we presented two different versions of the task to a group of patients with developmental amnesia (DA), each of whom sustained this form of pathology early in life. Compared with normal control participants, the DA group showed a delay-dependent reduction in novelty preference on one version of the task and an overall reduction on both versions combined. Because VPC is widely considered to be a measure of incidental recognition, the results appear to support the view that the hippocampus contributes to recognition memory. A difficulty for this conclusion, however, is that according to one current view the hippocampal contribution to recognition is limited to task conditions that encourage recollection of an item in some associated context, and according to another current view, to recognition of an item with the high confidence judgment that reflects a strong memory. By contrast, VPC, throughout which the participant remains entirely uninstructed other than to view the stimuli, would seem to lack such task conditions and so would likely lead to recognition based on familiarity rather than recollection or, alternatively, weak memories rather than strong. However, before concluding that the VPC impairment therefore contradicts both current views regarding the role of the hippocampus in recognition memory, two possibilities that would resolve this issue need to be investigated. One is that some variable in VPC, such as the extended period of stimulus encoding during familiarization, overrides its incidental nature, and, because this condition promotes either recollection- or strength-based recognition, renders the task hippocampal-dependent. 
The other possibility is that VPC, rather than providing a measure of incidental recognition, actually assesses an implicit, information-gathering process modulated by habituation, for which the hippocampus is also partly responsible, independent of its role in recognition. Copyright © 2010 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Simpson, C. A.
1985-01-01
In the present study of the responses of pairs of pilots to aircraft warning classification tasks using an isolated word, speaker-dependent speech recognition system, the induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed, and recognition errors were recorded by type for an isolated word speaker-dependent system and by an offline technique for a connected word speaker-dependent system. While errors increased with task loading for the isolated word system, there was no such effect for task loading in the case of the connected word system.
Post-Training Reversible Inactivation of the Hippocampus Enhances Novel Object Recognition Memory
ERIC Educational Resources Information Center
Oliveira, Ana M. M.; Hawk, Joshua D.; Abel, Ted; Havekes, Robbert
2010-01-01
Research on the role of the hippocampus in object recognition memory has produced conflicting results. Previous studies have used permanent hippocampal lesions to assess the requirement for the hippocampus in the object recognition task. However, permanent hippocampal lesions may impact performance through effects on processes besides memory…
Christie, Lori-Ann; Saunders, Richard C.; Kowalska, Danuta M.; MacKay, William A.; Head, Elizabeth; Cotman, Carl W.; Milgram, Norton W.
2014-01-01
To examine the effects of rhinal and dorsolateral prefrontal cortex lesions on object and spatial recognition memory in canines, we used a protocol in which both an object (delayed non-matching to sample, or DNMS) and a spatial (delayed non-matching to position, or DNMP) recognition task were administered daily. The tasks used similar procedures, such that only the type of stimulus information to be remembered differed. Rhinal cortex (RC) lesions produced a selective deficit on the DNMS task, both in retention of the task rules at short delays and in object recognition memory. By contrast, performance on the DNMP task remained intact at both short and long delay intervals in RC animals. Subjects who received dorsolateral prefrontal cortex (dlPFC) lesions were impaired on the spatial task at a short, 5-sec delay, suggesting disrupted retention of the general task rules; however, this impairment was transient, and long-term spatial memory performance was unaffected in dlPFC subjects. The present results provide support for the involvement of the RC in object, but not visuospatial, processing and recognition memory, whereas the dlPFC appears to mediate retention of a non-matching rule. These findings support theories of functional specialization within the medial temporal lobe and frontal cortex and suggest that the rhinal and dorsolateral prefrontal cortices in canines are functionally similar to analogous regions in other mammals. PMID:18792072
Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka
2014-01-01
Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks, such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed under a promotion focus than under a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had an impact on neither perceptual processing nor facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.
2014-01-01
Myoelectric control has been used for decades to control powered upper limb prostheses. Conventional, amplitude-based control has been employed to control a single prosthesis degree of freedom (DOF), such as closing and opening of the hand. Within the last decade, new and advanced arm and hand prostheses have been constructed that are capable of actuating numerous DOFs. Pattern recognition control has been proposed to control a greater number of DOFs than conventional control, but has traditionally been limited to controlling DOFs sequentially, one at a time. However, able-bodied individuals use multiple DOFs simultaneously, and it may be beneficial to provide amputees the ability to perform simultaneous movements. In this study, four amputees who had undergone targeted muscle reinnervation (TMR) surgery and had previous training using myoelectric prostheses were configured to use three control strategies: 1) conventional amplitude-based myoelectric control; 2) sequential (one-DOF) pattern recognition control; and 3) simultaneous pattern recognition control. Simultaneous pattern recognition was enabled by having amputees train each simultaneous movement as a separate motion class. For tasks that required control over just one DOF, sequential pattern recognition based control performed the best, with the lowest average completion times and length errors and the highest completion rates. For tasks that required control over two DOFs, the simultaneous pattern recognition controller performed the best, with the lowest average completion times and length errors and the highest completion rates compared with the other control strategies. In the two strategies in which users could employ simultaneous movements (conventional and simultaneous pattern recognition), amputees chose to use simultaneous movements 78% of the time with simultaneous pattern recognition and 64% of the time with conventional control for tasks that required two-DOF motions to reach the target. 
These results suggest that when amputees are given the ability to control multiple DOFs simultaneously, they choose to perform tasks that utilize multiple DOFs with simultaneous movements. Additionally, they were able to perform these tasks with higher performance (faster speed, lower length error and higher completion rates) without losing substantial performance in 1 DOF tasks. PMID:24410948
The impact of task demand on visual word recognition.
Yang, J; Zevin, J
2014-07-11
The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Segment-based acoustic models for continuous speech recognition
NASA Astrophysics Data System (ADS)
Ostendorf, Mari; Rohlicek, J. R.
1993-07-01
This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.
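The N-best rescoring strategy the abstract describes (scoring a short list of HMM-generated sentence hypotheses with a richer, more expensive model) can be sketched as follows. The hypothesis fields, the interpolation weight `alpha`, and the toy scores are illustrative assumptions, not the authors' implementation:

```python
def rescore_nbest(hypotheses, new_model_score, alpha=0.5):
    """Re-rank HMM-generated N-best hypotheses by interpolating each
    hypothesis's original (log-domain) HMM score with the score of a
    richer model that would be too costly to apply in the full search."""
    return sorted(
        hypotheses,
        key=lambda h: alpha * h["hmm_score"] + (1 - alpha) * new_model_score(h["text"]),
        reverse=True,  # higher combined log score is better
    )

# Toy usage: the richer model strongly prefers hypothesis "b".
nbest = [{"text": "a", "hmm_score": 0.0}, {"text": "b", "hmm_score": -1.0}]
best = rescore_nbest(nbest, lambda t: 0.0 if t == "b" else -10.0)[0]["text"]
```

The benefit is that the expensive model only scores a handful of candidates rather than the entire search space.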
Facial Emotion Recognition in Bipolar Disorder and Healthy Aging.
Altamura, Mario; Padalino, Flavia A; Stella, Eleonora; Balzotti, Angela; Bellomo, Antonello; Palumbo, Rocco; Di Domenico, Alberto; Mammarella, Nicola; Fairfield, Beth
2016-03-01
Emotional face recognition is impaired in bipolar disorder, but it is not clear whether this is specific for the illness. Here, we investigated how aging and bipolar disorder influence dynamic emotional face recognition. Twenty older adults, 16 bipolar patients, and 20 control subjects performed a dynamic affective facial recognition task and a subsequent rating task. Participants pressed a key as soon as they were able to discriminate whether the neutral face was assuming a happy or angry facial expression and then rated the intensity of each facial expression. Results showed that older adults recognized happy expressions faster, whereas bipolar patients recognized angry expressions faster. Furthermore, both groups rated emotional faces more intensely than did the control subjects. This study is one of the first to compare how aging and clinical conditions influence emotional facial recognition and underlines the need to consider the role of specific and common factors in emotional face recognition.
Warmth of familiarity and chill of error: affective consequences of recognition decisions.
Chetverikov, Andrey
2014-04-01
The present research aimed to assess the effect of recognition decision on subsequent affective evaluations of recognised and non-recognised objects. Consistent with the proposed account of post-decisional preferences, results showed that the effect of recognition on preferences depends upon objective familiarity. If stimuli are recognised, liking ratings are positively associated with exposure frequency; if stimuli are not recognised, this link is either absent (Experiment 1) or negative (Experiments 2 and 3). This interaction between familiarity and recognition exists even when recognition accuracy is at chance level and the "mere exposure" effect is absent. Finally, data obtained from repeated measurements of preferences and using manipulations of task order confirm that recognition decisions have a causal influence on preferences. The findings suggest that affective evaluation can provide fine-grained access to the efficacy of cognitive processing even in simple cognitive tasks.
The involvement of emotion recognition in affective theory of mind.
Mier, Daniela; Lis, Stefanie; Neuthe, Kerstin; Sauer, Carina; Esslinger, Christine; Gallhofer, Bernd; Kirsch, Peter
2010-11-01
This study was conducted to explore the relationship between emotion recognition and affective Theory of Mind (ToM). Forty subjects performed a facial emotion recognition and an emotional intention recognition task (affective ToM) in an event-related fMRI study. Conjunction analysis revealed overlapping activation during both tasks. Activation in some of these conjunctly activated regions was even stronger during affective ToM than during emotion recognition, namely in the inferior frontal gyrus, the superior temporal sulcus, the temporal pole, and the amygdala. In contrast to previous studies investigating ToM, we found no activation in the anterior cingulate, commonly assumed as the key region for ToM. The results point to a close relationship of emotion recognition and affective ToM and can be interpreted as evidence for the assumption that at least basal forms of ToM occur by an embodied, non-cognitive process. Copyright © 2010 Society for Psychophysiological Research.
Automated road marking recognition system
NASA Astrophysics Data System (ADS)
Ziyatdinov, R. R.; Shigabiev, R. R.; Talipov, D. N.
2017-09-01
Development of automated road marking recognition systems for existing and future vehicle control systems is an urgent task. One way to implement such systems is the use of neural networks. To test this possibility, recognition software based on a single-layer perceptron was developed. The resulting neural-network-based system successfully coped with the task both when driving in the daytime and at night.
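A single-layer perceptron of the kind the abstract mentions can be sketched as below. This is a minimal illustration on toy two-dimensional "brightness" features, not the published system, whose feature extraction and training data are not described here:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a single-layer perceptron with a step activation.

    samples: feature vectors (e.g. statistics of an image patch);
    labels: 0/1 class labels (e.g. background vs. road marking)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # classic perceptron update: shift toward the label
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy linearly separable data: bright patches (high values) are markings.
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w, b = train_perceptron(X, y)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

A single-layer perceptron can only separate linearly separable classes, which is why such a system depends heavily on good input features.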
Object Recognition Memory and the Rodent Hippocampus
ERIC Educational Resources Information Center
Broadbent, Nicola J.; Gaskin, Stephane; Squire, Larry R.; Clark, Robert E.
2010-01-01
In rodents, the novel object recognition task (NOR) has become a benchmark task for assessing recognition memory. Yet, despite its widespread use, a consensus has not developed about which brain structures are important for task performance. We assessed both the anterograde and retrograde effects of hippocampal lesions on performance in the NOR…
Face-Name Association Learning and Brain Structural Substrates in Alcoholism
Pitel, Anne-Lise; Chanraud, Sandra; Rohlfing, Torsten; Pfefferbaum, Adolf; Sullivan, Edith V.
2011-01-01
Background Associative learning is required for face-name association and is impaired in alcoholism, but the cognitive processes and brain structural components underlying this deficit remain unclear. It is also unknown whether prompting alcoholics to implement a deep level of processing during face-name encoding would enhance performance. Methods Abstinent alcoholics and controls performed a levels-of-processing face-name learning task. Participants indicated whether the face was that of an honest person (deep encoding) or that of a man (shallow encoding). Retrieval was examined using an associative (face-name) recognition task and a single-item (face or name only) recognition task. Participants also underwent a 3T structural MRI. Results Compared with controls, alcoholics had poorer associative and single-item recognition, each impaired to the same extent. Level of processing at encoding had little effect on recognition performance but affected reaction time. Correlations with brain volumes were generally modest and based primarily on reaction time in alcoholics, where the deeper the processing at encoding, the more restricted the correlations with brain volumes. In alcoholics, longer control task reaction times correlated modestly with volumes across several anterior to posterior brain regions; shallow encoding correlated with calcarine and striatal volumes; deep encoding correlated with precuneus and parietal volumes; associative recognition RT correlated with cerebellar volumes. In controls, poorer associative recognition with deep encoding correlated significantly with smaller volumes of frontal and striatal structures. Conclusions Despite prompting, alcoholics did not take advantage of encoding memoranda at a deep level to enhance face-name recognition accuracy. Nonetheless, conditions of deeper encoding resulted in faster reaction times and more specific relations with regional brain volumes than did shallow encoding. 
The normal relation between associative recognition and corticostriatal volumes was not present in alcoholics. Rather, their speeded reaction time occurred at the expense of accuracy and was related most robustly to cerebellar volumes. PMID:22509954
HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition.
Lagorce, Xavier; Orchard, Garrick; Galluppi, Francesco; Shi, Bertram E; Benosman, Ryad B
2017-07-01
This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy.
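The core time-surface idea, a patch of exponentially decayed elapsed times since each neighboring pixel's last event, can be sketched as follows. The decay constant `tau`, the neighborhood radius, and the grid representation are illustrative assumptions, not the paper's exact parameters:

```python
import math

def time_surface(last_event_times, cx, cy, t, radius=2, tau=50.0):
    """Compute a time-surface around pixel (cx, cy) at time t.

    last_event_times: 2D grid holding the most recent event timestamp
    per pixel (None if no event yet). Each value of the surface decays
    exponentially with the time elapsed since that pixel's last event,
    so recent activity is close to 1 and stale activity close to 0."""
    surface = []
    for dy in range(-radius, radius + 1):
        row = []
        for dx in range(-radius, radius + 1):
            x, y = cx + dx, cy + dy
            t_last = None
            if 0 <= y < len(last_event_times) and 0 <= x < len(last_event_times[0]):
                t_last = last_event_times[y][x]
            row.append(0.0 if t_last is None else math.exp(-(t - t_last) / tau))
        surface.append(row)
    return surface

# Toy usage: a single event at pixel (2, 2) at t = 100.
grid = [[None] * 5 for _ in range(5)]
grid[2][2] = 100.0
surface = time_surface(grid, 2, 2, 100.0)
```

In the hierarchy described above, such surfaces from the first layer are clustered into prototype features, and higher layers repeat the process over larger windows.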
Does the generation effect occur for pictures?
Kinjo, H; Snodgrass, J G
2000-01-01
The generation effect is the finding that self-generated stimuli are recalled and recognized better than read stimuli. The effect has been demonstrated primarily with words. This article examines the effect for pictures in two experiments: Subjects named complete pictures (name condition) and fragmented pictures (generation condition). In Experiment 1, memory was tested in 3 explicit tasks: free recall, yes/no recognition, and a source-monitoring task on whether each picture was complete or fragmented (the complete/incomplete task). The generation effect was found for all 3 tasks. However, in the recognition and source-monitoring tasks, the generation effect was observed only in the generation condition. We hypothesized that absence of the effect in the name condition was due to the sensory or process match effect between study and test pictures and the superior identification of pictures in the name condition. Therefore, stimuli were changed from pictures to their names in Experiment 2. Memory was tested in the recognition task, complete/incomplete task, and second source-monitoring task (success/failure) on whether each picture had been identified successfully. The generation effect was observed for all 3 tasks. These results suggest that memory of structural and semantic characteristics and of success in identification of generated pictures may contribute to the generation effect.
Effects of visual and verbal interference tasks on olfactory memory: the role of task complexity.
Annett, J M; Leslie, J C
1996-08-01
Recent studies have demonstrated that visual and verbal suppression tasks interfere with olfactory memory in a manner which is partially consistent with a dual coding interpretation. However, it has been suggested that total task complexity rather than modality specificity of the suppression tasks might account for the observed pattern of results. This study addressed the issue of whether or not the level of difficulty and complexity of suppression tasks could explain the apparent modality effects noted in earlier experiments. A total of 608 participants were each allocated to one of 19 experimental conditions involving interference tasks which varied suppression type (visual or verbal), nature of complexity (single, double or mixed) and level of difficulty (easy, optimal or difficult) and presented with 13 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Both recognition and recall performance showed an overall effect for suppression nature, suppression level and time of testing with no effect for suppression type. The results lend only limited support to Paivio's (1986) dual coding theory, but have a number of characteristics which suggest that an adequate account of olfactory memory may be broadly similar to current theories of face and object recognition. All of these phenomena might be dealt with by an appropriately modified version of dual coding theory.
Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola
2014-01-01
Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643
Multi-Task Convolutional Neural Network for Pose-Invariant Face Recognition
NASA Astrophysics Data System (ADS)
Yin, Xi; Liu, Xiaoming
2018-02-01
This paper explores multi-task learning (MTL) for face recognition. We answer the questions of how and why MTL can improve the face recognition performance. First, we propose a multi-task Convolutional Neural Network (CNN) for face recognition where identity classification is the main task and pose, illumination, and expression estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weight to each side task, which is a crucial problem in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the variations from the learnt identity features. Extensive experiments on the entire Multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in Multi-PIE for face recognition. Our approach is also applicable to in-the-wild datasets for pose-invariant face recognition and achieves comparable or better performance than state of the art on LFW, CFP, and IJB-A datasets.
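The overall loss structure of such a multi-task setup, a main identity loss plus weighted side-task losses, can be sketched as follows. The paper's dynamic-weighting scheme learns the side-task weights automatically; here they are simply passed in and normalized, which is an illustrative simplification, not the authors' rule:

```python
def multitask_loss(main_loss, side_losses, side_weights):
    """Combine a main-task loss with weighted side-task losses.

    The side-task terms act as regularizers on the shared features;
    their weights are normalized to sum to one so the side tasks as a
    whole stay subordinate to the main task."""
    total_w = sum(side_weights)
    norm = [w / total_w for w in side_weights] if total_w else side_weights
    return main_loss + sum(w * l for w, l in zip(norm, side_losses))

# Toy usage: identity loss 1.0; pose and illumination side losses
# weighted equally.
loss = multitask_loss(1.0, [0.5, 1.5], [1.0, 1.0])
```

In a learned scheme, the weights themselves would be parameters updated during training rather than fixed inputs.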
The Costs and Benefits of Testing and Guessing on Recognition Memory
ERIC Educational Resources Information Center
Huff, Mark J.; Balota, David A.; Hutchison, Keith A.
2016-01-01
We examined whether 2 types of interpolated tasks (i.e., retrieval-practice via free recall or guessing a missing critical item) improved final recognition for related and unrelated word lists relative to restudying or completing a filler task. Both retrieval-practice and guessing tasks improved correct recognition relative to restudy and filler…
Rapid Naming Speed and Chinese Character Recognition
ERIC Educational Resources Information Center
Liao, Chen-Huei; Georgiou, George K.; Parrila, Rauno
2008-01-01
We examined the relationship between rapid naming speed (RAN) and Chinese character recognition accuracy and fluency. Sixty-three grade 2 and 54 grade 4 Taiwanese children were administered four RAN tasks (colors, digits, Zhu-Yin-Fu-Hao, characters), and two character recognition tasks. RAN tasks accounted for more reading variance in grade 4 than…
"We all look the same to me": positive emotions eliminate the own-race bias in face recognition.
Johnson, Kareem J; Fredrickson, Barbara L
2005-11-01
Extrapolating from the broaden-and-build theory, we hypothesized that positive emotion may reduce the own-race bias in facial recognition. In Experiments 1 and 2, Caucasian participants (N = 89) viewed Black and White faces for a recognition task. They viewed videos eliciting joy, fear, or neutrality before the learning (Experiment 1) or testing (Experiment 2) stages of the task. Results reliably supported the hypothesis. Relative to fear or a neutral state, joy experienced before either stage improved recognition of Black faces and significantly reduced the own-race bias. Discussion centers on possible mechanisms for this reduction of the own-race bias, including improvements in holistic processing and promotion of a common in-group identity due to positive emotions.
Chemical Entity Recognition and Resolution to ChEBI
Grego, Tiago; Pesquita, Catia; Bastos, Hugo P.; Couto, Francisco M.
2012-01-01
Chemical entities are ubiquitous throughout the biomedical literature, and the development of text-mining systems that can efficiently identify those entities is required. Due to the lack of available corpora and data resources, the community has focused its efforts on the development of gene and protein named entity recognition systems, but with the release of ChEBI and the availability of an annotated corpus, this task can now be addressed. We developed a machine-learning-based method for chemical entity recognition and a lexical-similarity-based method for chemical entity resolution and compared them with Whatizit, a popular dictionary-based method. Our methods outperformed the dictionary-based method in all tasks, yielding an improvement in F-measure of 20% for the entity recognition task, 2–5% for the entity-resolution task, and 15% for combined entity recognition and resolution tasks. PMID:25937941
Recognizing Dynamic Faces in Malaysian Chinese Participants.
Tan, Chrystalle B Y; Sheppard, Elizabeth; Stephen, Ian D
2016-03-01
High performance level in face recognition studies does not seem to be replicable in real-life situations possibly because of the artificial nature of laboratory studies. Recognizing faces in natural social situations may be a more challenging task, as it involves constant examination of dynamic facial motions that may alter facial structure vital to the recognition of unfamiliar faces. Because of the incongruences of recognition performance, the current study developed stimuli that closely represent natural social situations to yield results that more accurately reflect observers' performance in real-life settings. Naturalistic stimuli of African, East Asian, and Western Caucasian actors introducing themselves were presented to investigate Malaysian Chinese participants' recognition sensitivity and looking strategies when performing a face recognition task. When perceiving dynamic facial stimuli, participants fixated most on the nose, followed by the mouth then the eyes. Focusing on the nose may have enabled participants to gain a more holistic view of actors' facial and head movements, which proved to be beneficial in recognizing identities. Participants recognized all three races of faces equally well. The current results, which differed from a previous static face recognition study, may be a more accurate reflection of observers' recognition abilities and looking strategies. © The Author(s) 2015.
How Chinese Semantics Capability Improves Interpretation in Visual Communication
ERIC Educational Resources Information Center
Cheng, Chu-Yu; Ou, Yang-Kun; Kin, Ching-Lung
2017-01-01
A visual representation involves delivering messages through visually communicated images. The study assumed that semantic recognition can affect visual interpretation ability, and the results showed that students graduating from a general high school achieved better results in semantic recognition and image interpretation tasks than students…
Attentional biases and memory for emotional stimuli in men and male rhesus monkeys.
Lacreuse, Agnès; Schatz, Kelly; Strazzullo, Sarah; King, Hanna M; Ready, Rebecca
2013-11-01
We examined attentional biases for social and non-social emotional stimuli in young adult men and compared the results to those of male rhesus monkeys (Macaca mulatta) previously tested in a similar dot-probe task (King et al. in Psychoneuroendocrinology 37(3):396-409, 2012). Recognition memory for these stimuli was also analyzed in each species, using a recognition memory task in humans and a delayed non-matching-to-sample task in monkeys. We found that both humans and monkeys displayed a similar pattern of attentional biases toward threatening facial expressions of conspecifics. The bias was significant in monkeys and of marginal significance in humans. In addition, humans, but not monkeys, exhibited an attentional bias away from negative non-social images. Attentional biases for social and non-social threat differed significantly, with both species showing a pattern of vigilance toward negative social images and avoidance of negative non-social images. Positive stimuli did not elicit significant attentional biases for either species. In humans, emotional content facilitated the recognition of non-social images, but no effect of emotion was found for the recognition of social images. Recognition accuracy was not affected by emotion in monkeys, but response times were faster for negative relative to positive images. Altogether, these results suggest shared mechanisms of social attention in humans and monkeys, with both species showing a pattern of selective attention toward threatening faces of conspecifics. These data are consistent with the view that selective vigilance to social threat is the result of evolutionary constraints. Yet, selective attention to threat was weaker in humans than in monkeys, suggesting that regulatory mechanisms enable non-anxious humans to reduce sensitivity to social threat in this paradigm, likely through enhanced prefrontal control and reduced amygdala activation. 
In addition, the findings emphasize important differences in attentional biases to social versus non-social threat in both species. Differences in the impact of emotional stimuli on recognition memory between monkeys and humans will require further study, as methodological differences in the recognition tasks may have affected the results.
Voice tracking and spoken word recognition in the presence of other voices
NASA Astrophysics Data System (ADS)
Litong-Palima, Marisciel; Violanda, Renante; Saloma, Caesar
2004-12-01
We study the human hearing process by modeling the hair cell as a thresholded Hopf bifurcator and compare our calculations with experimental results involving human subjects in two different multi-source listening tasks: voice tracking and spoken-word recognition. In the model, we observed noise suppression by destructive interference between noise sources, which weakens the effective noise strength acting on the hair cell. Different success rate characteristics were observed for the two tasks. Hair cell performance at low threshold levels agrees well with results from voice-tracking experiments, while results from word-recognition experiments are consistent with a linear model of the hearing process. The ability of humans to track a target voice is robust against cross-talk interference, unlike word-recognition performance, which deteriorates quickly with the number of uncorrelated noise sources in the environment; such deterioration is a response behavior associated with linear systems.
Baijal, Shruti; Nakatani, Chie; van Leeuwen, Cees; Srinivasan, Narayanan
2013-06-07
Human observers show remarkable efficiency in statistical estimation; they are able, for instance, to estimate the mean size of visual objects, even if their number exceeds the capacity limits of focused attention. This ability has been understood as the result of a distinct mode of attention, i.e. distributed attention. Compared to the focused attention mode, working memory representations under distributed attention are proposed to be more compressed, leading to reduced working memory loads. An alternate proposal is that distributed attention uses less structured, feature-level representations. These would fill up working memory (WM) more, even when target set size is low. Using event-related potentials, we compared WM loading in a typical distributed attention task (mean size estimation) to that in a corresponding focused attention task (object recognition), using a measure called contralateral delay activity (CDA). Participants performed both tasks on 2, 4, or 8 different-sized target disks. In the recognition task, CDA amplitude increased with set size; notably, however, in the mean estimation task the CDA amplitude was high regardless of set size. In particular for set-size 2, the amplitude was higher in the mean estimation task than in the recognition task. The result showed that the task involves full WM loading even with a low target set size. This suggests that in the distributed attention mode, representations are not compressed, but rather less structured than under focused attention conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.
MacPherson, Sarah E; Turner, Martha S; Bozzali, Marco; Cipolotti, Lisa; Shallice, Tim
2016-03-01
Memory deficits in patients with frontal lobe lesions are most apparent on free recall tasks that require the selection, initiation, and implementation of retrieval strategies. The effect of frontal lesions on recognition memory performance is less clear, with some studies reporting recognition memory impairments but others not. The majority of these studies do not directly compare recall and recognition within the same group of frontal patients, assessing only recall or recognition memory performance. Other studies that do compare recall and recognition in the same frontal group do not consider recall or recognition tests that are comparable for difficulty. Recognition memory impairments may not be reported because recognition memory tasks are less demanding. This study aimed to investigate recall and recognition impairments in the same group of 47 frontal patients and 78 healthy controls. The Doors and People Test was administered as a neuropsychological test of memory as it assesses both verbal and visual recall and recognition using subtests that are matched for difficulty. Significant verbal and visual recall and recognition impairments were found in the frontal patients. These results demonstrate that when frontal patients are assessed on recall and recognition memory tests of comparable difficulty, memory impairments are found on both types of episodic memory test. PsycINFO Database Record (c) 2016 APA, all rights reserved.
Koelkebeck, Katja; Kohl, Waldemar; Luettgenau, Julia; Triantafillou, Susanna; Ohrmann, Patricia; Satoh, Shinji; Minoshita, Seiko
2015-07-30
A novel emotion recognition task that employs photos of a Japanese mask representing a highly ambiguous stimulus was evaluated. As non-Asians perceive and/or label emotions differently from Asians, we aimed to identify patterns of task-performance in non-Asian healthy volunteers with a view to future patient studies. The Noh mask test was presented to 42 adult German participants. Reaction times and emotion attribution patterns were recorded. To control for emotion identification abilities, a standard emotion recognition task was used among others. Questionnaires assessed personality traits. Finally, results were compared to age- and gender-matched Japanese volunteers. Compared to other tasks, German participants displayed slowest reaction times on the Noh mask test, indicating higher demands of ambiguous emotion recognition. They assigned more positive emotions to the mask than Japanese volunteers, demonstrating culture-dependent emotion identification patterns. As alexithymic and anxious traits were associated with slower reaction times, personality dimensions impacted on performance, as well. We showed an advantage of ambiguous over conventional emotion recognition tasks. Moreover, we determined emotion identification patterns in Western individuals impacted by personality dimensions, suggesting performance differences in clinical samples. Due to its properties, the Noh mask test represents a promising tool in the differential diagnosis of psychiatric disorders, e.g. schizophrenia. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Does the cost function matter in Bayes decision rule?
Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann
2012-02-01
In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
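The string-versus-symbol-error tradeoff described in this abstract can be illustrated with a toy decoder. The sketch below is not from the paper: the three-string posterior, its probabilities, and the helper names are invented for illustration. It shows that the 0-1-cost (MAP) decision and the minimum-expected-Levenshtein-cost (minimum Bayes risk) decision can select different strings:

```python
def levenshtein(a, b):
    """Token-level edit distance between two word sequences (standard DP)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[m][n]

# Hypothetical posterior over candidate word strings (illustrative values).
posterior = {"a b": 0.4, "a c": 0.3, "b c": 0.3}

# 0-1 cost: choose the single most probable string (MAP decision).
map_hyp = max(posterior, key=posterior.get)

# Metric cost: choose the string minimizing expected Levenshtein cost
# over the same candidate set (minimum Bayes risk decision).
def expected_cost(hyp):
    return sum(p * levenshtein(hyp.split(), ref.split())
               for ref, p in posterior.items())

mbr_hyp = min(posterior, key=expected_cost)
```

Here the MAP rule picks "a b" (highest posterior, 0.4), while the minimum-Bayes-risk rule picks "a c" (lowest expected edit cost, 0.7): exactly the kind of divergence between the two cost functions that the paper's conditions characterize.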
Recognition and reading aloud of kana and kanji word: an fMRI study.
Ino, Tadashi; Nakai, Ryusuke; Azuma, Takashi; Kimura, Toru; Fukuyama, Hidenao
2009-03-16
It has been proposed that different brain regions are recruited for processing the two Japanese writing systems, kanji (morphograms) and kana (syllabograms). However, this difference may depend on the type of word used and on the type of task performed. Using fMRI, we investigated brain activation during the processing of kanji and kana words of similarly high familiarity in two tasks: word recognition and reading aloud. In both tasks, words and non-words were presented side by side; subjects pressed a button corresponding to the real word in the word recognition task and read the real word aloud in the reading aloud task. Brain activations were similar for kanji and kana during the reading aloud task, whereas during the word recognition task, in which accurate identification and selection were required, kanji relative to kana activated regions of bilateral frontal, parietal, and occipitotemporal cortices, all related mainly to visual word-form analysis and visuospatial attention. Regarding differences in brain activity between the two tasks, differential activation was found only in regions associated with task-specific sensorimotor processing for kana, whereas for kanji the visuospatial attention network also showed greater activation during the word recognition task than during the reading aloud task. We conclude that the differences in brain activation between kanji and kana depend on the interaction between script characteristics and task demands.
Non-native Listeners’ Recognition of High-Variability Speech Using PRESTO
Tamati, Terrin N.; Pisoni, David B.
2015-01-01
Background Natural variability in speech is a significant challenge to robust successful spoken word recognition. In everyday listening environments, listeners must quickly adapt and adjust to multiple sources of variability in both the signal and listening environments. High-variability speech may be particularly difficult to understand for non-native listeners, who have less experience with the second language (L2) phonological system and less detailed knowledge of sociolinguistic variation of the L2. Purpose The purpose of this study was to investigate the effects of high-variability sentences on non-native speech recognition and to explore the underlying sources of individual differences in speech recognition abilities of non-native listeners. Research Design Participants completed two sentence recognition tasks involving high-variability and low-variability sentences. They also completed a battery of behavioral tasks and self-report questionnaires designed to assess their indexical processing skills, vocabulary knowledge, and several core neurocognitive abilities. Study Sample Native speakers of Mandarin (n = 25) living in the United States recruited from the Indiana University community participated in the current study. A native comparison group consisted of scores obtained from native speakers of English (n = 21) in the Indiana University community taken from an earlier study. Data Collection and Analysis Speech recognition in high-variability listening conditions was assessed with a sentence recognition task using sentences from PRESTO (Perceptually Robust English Sentence Test Open-Set) mixed in 6-talker multitalker babble. Speech recognition in low-variability listening conditions was assessed using sentences from HINT (Hearing In Noise Test) mixed in 6-talker multitalker babble. Indexical processing skills were measured using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. 
Vocabulary knowledge was assessed with the WordFam word familiarity test, and executive functioning was assessed with the BRIEF-A (Behavioral Rating Inventory of Executive Function – Adult Version) self-report questionnaire. Scores from the non-native listeners on behavioral tasks and self-report questionnaires were compared with scores obtained from native listeners tested in a previous study and were examined for individual differences. Results Non-native keyword recognition scores were significantly lower on PRESTO sentences than on HINT sentences. Non-native listeners’ keyword recognition scores were also lower than native listeners’ scores on both sentence recognition tasks. Differences in performance on the sentence recognition tasks between non-native and native listeners were larger on PRESTO than on HINT, although group differences varied by signal-to-noise ratio. The non-native and native groups also differed in the ability to categorize talkers by region of origin and in vocabulary knowledge. Individual non-native word recognition accuracy on PRESTO sentences in multitalker babble at more favorable signal-to-noise ratios was found to be related to several BRIEF-A subscales and composite scores. However, non-native performance on PRESTO was not related to regional dialect categorization, talker and gender discrimination, or vocabulary knowledge. Conclusions High-variability sentences in multitalker babble were particularly challenging for non-native listeners. Difficulty under high-variability testing conditions was related to lack of experience with the L2, especially L2 sociolinguistic information, compared with native listeners. Individual differences among the non-native listeners were related to weaknesses in core neurocognitive abilities affecting behavioral control in everyday life. PMID:25405842
Cognition and speech-in-noise recognition: the role of proactive interference.
Ellis, Rachel J; Rönnberg, Jerker
2014-01-01
Complex working memory (WM) span tasks have been shown to predict speech-in-noise (SIN) recognition. Studies of complex WM span tasks suggest that, rather than indexing a single cognitive process, performance on such tasks may be governed by separate cognitive subprocesses embedded within WM. Previous research has suggested that one such subprocess indexed by WM tasks is proactive interference (PI), which refers to difficulties memorizing current information because of interference from previously stored long-term memory representations for similar information. The aim of the present study was to investigate phonological PI and to examine the relationship between PI (semantic and phonological) and SIN perception. A within-subjects experimental design was used. An opportunity sample of 24 young listeners with normal hearing was recruited. Measures of resistance to, and release from, semantic and phonological PI were calculated alongside the signal-to-noise ratio required to identify 50% of keywords correctly in a SIN recognition task. The data were analyzed using t-tests and correlations. Evidence of release from and resistance to semantic interference was observed. These measures correlated significantly with SIN recognition. Limited evidence of phonological PI was observed. The results show that capacity to resist semantic PI can be used to predict SIN recognition scores in young listeners with normal hearing. On the basis of these findings, future research will focus on investigating whether tests of PI can be used in the treatment and/or rehabilitation of hearing loss. American Academy of Audiology.
Speaker recognition with temporal cues in acoustic and electric hearing
NASA Astrophysics Data System (ADS)
Vongphoe, Michael; Zeng, Fan-Gang
2005-08-01
Natural spoken language processing includes not only speech recognition but also identification of the speaker's gender, age, emotional, and social status. Our purpose in this study is to evaluate whether temporal cues are sufficient to support both speech and speaker recognition. Ten cochlear-implant and six normal-hearing subjects were presented with vowel tokens spoken by three men, three women, two boys, and two girls. In one condition, the subject was asked to recognize the vowel. In the other condition, the subject was asked to identify the speaker. Extensive training was provided for the speaker recognition task. Normal-hearing subjects achieved nearly perfect performance in both tasks. Cochlear-implant subjects achieved good performance in vowel recognition but poor performance in speaker recognition. The level of the cochlear implant performance was functionally equivalent to normal performance with eight spectral bands for vowel recognition but only to one band for speaker recognition. These results show a disassociation between speech and speaker recognition with primarily temporal cues, highlighting the limitation of current speech processing strategies in cochlear implants. Several methods, including explicit encoding of fundamental frequency and frequency modulation, are proposed to improve speaker recognition for current cochlear implant users.
The role of color information on object recognition: a review and meta-analysis.
Bramão, Inês; Reis, Alexandra; Petersson, Karl Magnus; Faísca, Luís
2011-09-01
In this study, we systematically review the scientific literature on the effect of color on object recognition. Thirty-five independent experiments, comprising 1535 participants, were included in a meta-analysis. We found a moderate effect of color on object recognition (d=0.28). Specific effects of moderator variables were analyzed and we found that color diagnosticity is the factor with the greatest moderator effect on the influence of color in object recognition; studies using color diagnostic objects showed a significant color effect (d=0.43), whereas a marginal color effect was found in studies that used non-color diagnostic objects (d=0.18). The present study did not permit the drawing of specific conclusions about the moderator effect of the object recognition task; while the meta-analytic review showed that color information improves object recognition mainly in studies using naming tasks (d=0.36), the literature review revealed a large body of evidence showing positive effects of color information on object recognition in studies using a large variety of visual recognition tasks. We also found that color is important for the ability to recognize artifacts and natural objects, to recognize objects presented as types (line-drawings) or as tokens (photographs), and to recognize objects that are presented without surface details, such as texture or shadow. Taken together, the results of the meta-analysis strongly support the contention that color plays a role in object recognition. This suggests that the role of color should be taken into account in models of visual object recognition. Copyright © 2011 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Clark, Steven E.; Abbe, Allison; Larson, Rakel P.
2006-01-01
S. E. Clark, A. Hori, A. Putnam, and T. J. Martin (2000) showed that collaboration on a recognition memory task produced facilitation in recognition of targets but had inconsistent and sometimes negative effects regarding distractors. They accounted for these results within the framework of a dual-process, recall-plus-familiarity model but…
ERIC Educational Resources Information Center
Olszewska, Justyna M.; Reuter-Lorenz, Patricia A.; Munier, Emily; Bendler, Sara A.
2015-01-01
False working memories readily emerge using a visual item-recognition variant of the converging associates task. Two experiments, manipulating study and test modality, extended prior working memory results by demonstrating a reliable false recognition effect (more false alarms to associatively related lures than to unrelated lures) within seconds…
Transfer between Pose and Illumination Training in Face Recognition
ERIC Educational Resources Information Center
Liu, Chang Hong; Bhuiyan, Md. Al-Amin; Ward, James; Sui, Jie
2009-01-01
The relationship between pose and illumination learning in face recognition was examined in a yes-no recognition paradigm. The authors assessed whether pose training can transfer to a new illumination or vice versa. Results show that an extensive level of pose training through a face-name association task was able to generalize to a new…
The memory state heuristic: A formal model based on repeated recognition judgments.
Castela, Marta; Erdfelder, Edgar
2017-02-01
The recognition heuristic (RH) theory predicts that, in comparative judgment tasks, if one object is recognized and the other is not, the recognized one is chosen. The memory-state heuristic (MSH) extends the RH by assuming that choices are not affected by recognition judgments per se, but by the memory states underlying these judgments (i.e., recognition certainty, uncertainty, or rejection certainty). Specifically, the larger the discrepancy between memory states, the larger the probability of choosing the object in the higher state. The typical RH paradigm does not allow estimation of the underlying memory states because it is unknown whether the objects were previously experienced or not. Therefore, we extended the paradigm by repeating the recognition task twice. In line with high threshold models of recognition, we assumed that inconsistent recognition judgments result from uncertainty whereas consistent judgments most likely result from memory certainty. In Experiment 1, we fitted 2 nested multinomial models to the data: an MSH model that formalizes the relation between memory states and binary choices explicitly and an approximate model that ignores the (unlikely) possibility of consistent guesses. Both models provided converging results. As predicted, reliance on recognition increased with the discrepancy in the underlying memory states. In Experiment 2, we replicated these results and found support for choice consistency predictions of the MSH. Additionally, recognition and choice latencies were in agreement with the MSH in both experiments. Finally, we validated critical parameters of our MSH model through a cross-validation method and a third experiment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Fleming, Stephen A; Dilger, Ryan N
2017-03-15
Novelty preference paradigms have been widely used to study recognition memory and its neural substrates. The piglet model continues to advance the study of neurodevelopment, and tasks that use novelty preference will be especially useful given their translatability to humans. However, this behavioral paradigm has seen little use in the pig, and previous studies using the novel object recognition paradigm in piglets have yielded inconsistent results. The current study was conducted to determine whether piglets are capable of displaying a novelty preference. Herein a series of experiments was conducted using novel object recognition or location in 3- and 4-week-old piglets. In the novel object recognition task, piglets were able to discriminate between novel and sample objects after delays of 2 min, 1 h, 1 day, and 2 days (all P<0.039) at both ages. Performance was sex-dependent: females could perform both the 1- and 2-day delays (P<0.036), and males could perform the 2-day delay (P=0.008) but not the 1-day delay (P=0.347). Furthermore, 4-week-old piglets and females tended to exhibit greater exploratory behavior than males. Such performance did not extend to novel location recognition tasks, as piglets were only able to discriminate between novel and sample locations after a short delay (P>0.046). In conclusion, this study determined that piglets are able to perform the novel object and location recognition tasks at 3 to 4 weeks of age; however, performance was dependent on sex, age, and delay. Copyright © 2016 Elsevier B.V. All rights reserved.
Preschoolers Benefit From Visually Salient Speech Cues
Holt, Rachael Frush
2015-01-01
Purpose This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. It also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336
Robust relationship between reading span and speech recognition in noise
Souza, Pamela; Arehart, Kathryn
2015-01-01
Objective Working memory refers to a cognitive system that manages information processing and temporary storage. Recent work has demonstrated that individual differences in working memory capacity measured using a reading span task are related to ability to recognize speech in noise. In this project, we investigated whether the specific implementation of the reading span task influenced the strength of the relationship between working memory capacity and speech recognition. Design The relationship between speech recognition and working memory capacity was examined for two different working memory tests that varied in approach, using a within-subject design. Data consisted of audiometric results along with the two different working memory tests; one speech-in-noise test; and a reading comprehension test. Study sample The test group included 94 older adults with varying hearing loss and 30 younger adults with normal hearing. Results Listeners with poorer working memory capacity had more difficulty understanding speech in noise after accounting for age and degree of hearing loss. That relationship did not differ significantly between the two different implementations of reading span. Conclusions Our findings suggest that different implementations of a verbal reading span task do not affect the strength of the relationship between working memory capacity and speech recognition. PMID:25975360
Perspective taking in older age revisited: a motivational perspective.
Zhang, Xin; Fung, Helene H; Stanley, Jennifer T; Isaacowitz, Derek M; Ho, Man Yee
2013-10-01
How perspective-taking ability changes with age (i.e., whether older adults are better at understanding others' behaviors and intentions and show greater empathy to others or not) is not clear, with prior empirical findings on this phenomenon yielding mixed results. In a series of experiments, we investigated the phenomenon from a motivational perspective. Perceived closeness between participants and the experimenter (Study 1) or the target in an emotion recognition task (Study 2) was manipulated to examine whether the closeness could influence participants' performance in faux pas recognition (Study 1) and emotion recognition (Study 2). It was found that the well-documented negative age effect (i.e., older adults performed worse than younger adults in faux pas and emotion recognition tasks) was only replicated in the control condition for both tasks. When closeness was experimentally increased, older adults enhanced their performance, and they now performed at a comparable level as younger adults. Findings from the 2 experiments suggest that the reported poorer performance of older adults in perspective-taking tasks might be attributable to a lack of motivation instead of ability to perform in laboratory settings. With the presence of strong motivation, older adults have the ability to perform equally well as younger adults.
ERIC Educational Resources Information Center
Treese, Anne-Cecile; Johansson, Mikael; Lindgren, Magnus
2010-01-01
The emotional salience of faces has previously been shown to induce memory distortions in recognition memory tasks. This event-related potential (ERP) study used repeated runs of a continuous recognition task with emotional and neutral faces to investigate emotion-induced memory distortions. In the second and third runs, participants made more…
Lawson, Rebecca
2004-10-01
In two experiments, the identification of novel 3-D objects was worse for depth-rotated and mirror-reflected views, compared with the study view in an implicit affective preference memory task, as well as in an explicit recognition memory task. In Experiment 1, recognition was worse and preference was lower when depth-rotated views of an object were paired with an unstudied object relative to trials when the study view of that object was shown. There was a similar trend for mirror-reflected views. In Experiment 2, the study view of an object was both recognized and preferred above chance when it was paired with either depth-rotated or mirror-reflected views of that object. These results suggest that view-sensitive representations of objects mediate performance in implicit, as well as explicit, memory tasks. The findings do not support the claim that separate episodic and structural description representations underlie performance in implicit and explicit memory tasks, respectively.
Chemical entity recognition in patents by combining dictionary-based and statistical approaches
Akhondi, Saber A.; Pons, Ewoud; Afzal, Zubair; van Haagen, Herman; Becker, Benedikt F.H.; Hettne, Kristina M.; van Mulligen, Erik M.; Kors, Jan A.
2016-01-01
We describe the development of a chemical entity recognition system and its application in the CHEMDNER-patent track of BioCreative 2015. This community challenge includes a Chemical Entity Mention in Patents (CEMP) recognition task and a Chemical Passage Detection (CPD) classification task. We addressed both tasks by an ensemble system that combines a dictionary-based approach with a statistical one. For this purpose the performance of several lexical resources was assessed using Peregrine, our open-source indexing engine. We combined our dictionary-based results on the patent corpus with the results of tmChem, a chemical recognizer using a conditional random field classifier. To improve the performance of tmChem, we utilized three additional features, viz. part-of-speech tags, lemmas and word-vector clusters. When evaluated on the training data, our final system obtained an F-score of 85.21% for the CEMP task, and an accuracy of 91.53% for the CPD task. On the test set, the best system ranked sixth among 21 teams for CEMP with an F-score of 86.82%, and second among nine teams for CPD with an accuracy of 94.23%. The differences in performance between the best ensemble system and the statistical system separately were small. Database URL: http://biosemantics.org/chemdner-patents PMID:27141091
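The dictionary-plus-statistical ensemble idea from this abstract can be sketched in a few lines. This is not the actual Peregrine/tmChem pipeline: the tiny lexicon, the toy suffix "tagger" standing in for the CRF, and all function names are invented for illustration. The minimal ensemble rule shown here is simply the union of the spans proposed by the two recognizers:

```python
import re

def dict_recognize(text, lexicon):
    """Dictionary-based pass: exact, case-insensitive matches of lexicon terms."""
    spans = set()
    for term in lexicon:
        for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            spans.add((m.start(), m.end()))
    return spans

def stat_recognize(text):
    """Stand-in for a statistical tagger (e.g. a CRF): here a toy
    pattern for '-ol'/'-ine' suffixed tokens, purely illustrative."""
    return {(m.start(), m.end())
            for m in re.finditer(r"\b\w+(?:ol|ine)\b", text)}

def ensemble(text, lexicon):
    # Union of both recognizers' spans, sorted by position.
    return sorted(dict_recognize(text, lexicon) | stat_recognize(text))

text = "The mixture contained ethanol, caffeine and sodium chloride."
entities = [text[s:e] for s, e in ensemble(text, {"sodium chloride"})]
# -> ['ethanol', 'caffeine', 'sodium chloride']
```

The dictionary pass contributes the multi-word term the pattern misses, and the pattern contributes terms absent from the lexicon; in the real system, overlapping or conflicting spans from the two components would additionally need a resolution rule.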
Koutstaal, Wilma
2003-03-01
Investigations of memory deficits in older individuals have concentrated on their increased likelihood of forgetting events or details of events that were actually encountered (errors of omission). However, mounting evidence demonstrates that normal cognitive aging also is associated with an increased propensity for errors of commission--shown in false alarms or false recognition. The present study examined the origins of this age difference. Older and younger adults each performed three types of memory tasks in which details of encountered items might influence performance. Although older adults showed greater false recognition of related lures on a standard (identical) old/new episodic recognition task, older and younger adults showed parallel effects of detail on repetition priming and meaning-based episodic recognition (decreased priming and decreased meaning-based recognition for different relative to same exemplars). The results suggest that the older adults encoded details but used them less effectively than the younger adults in the recognition context requiring their deliberate, controlled use.
Gender differences in recognition of toy faces suggest a contribution of experience.
Ryan, Kaitlin F; Gauthier, Isabel
2016-12-01
When there is a gender effect, women perform better than men in face recognition tasks. Prior work has not documented a male advantage on a face recognition task, suggesting that women may outperform men at face recognition generally, either for evolutionary reasons or through the influence of social roles. Here, we question the idea that women excel at all face recognition and provide a proof of concept based on a face category for which men outperform women. We developed a test of face learning to measure individual differences with face categories for which men and women may differ in experience, using the faces of Barbie dolls and of Transformers. The results show a crossover interaction between subject gender and category, where men outperform women with Transformers' faces. We demonstrate that men can outperform women with some categories of faces, suggesting that explanations for a general face recognition advantage for women are in fact not needed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Age- and sex-related disturbance in a battery of sensorimotor and cognitive tasks in Kunming mice.
Chen, Gui-Hai; Wang, Yue-Ju; Zhang, Li-Qun; Zhou, Jiang-Ning
2004-12-15
A battery of tasks, i.e. beam walking, open field, tightrope, radial six-arm water maze (RAWM), novel-object recognition and olfactory discrimination, was used to determine whether there was age- and sex-related memory deterioration in Kunming (KM) mice, and whether these tasks are independent or correlated with each other. Two age groups of KM mice were used: a younger group (7-8 months old, 12 males and 11 females) and an older group (17-18 months old, 12 males and 12 females). The results showed that the spatial learning ability and memory in the RAWM were lower in older female KM mice relative to younger female mice and older male mice. Consistent with this, in the novel-object recognition task, a non-spatial cognitive task, older female mice but not older male mice had impairment of short-term memory. In olfactory discrimination, another non-spatial task, the older mice retained this ability. Interestingly, female mice performed better than males, especially in the younger group. The older females exhibited sensorimotor impairment in the tightrope task and low locomotor activity in the open-field task. Moreover, older mice spent a longer time in the peripheral squares of the open-field than younger ones. The non-spatial cognitive performance in the novel-object recognition and olfactory discrimination tasks was related to performance in the open-field, whereas the spatial cognitive performance in the RAWM was not related to performance in any of the three sensorimotor tasks. These results suggest that disturbance of spatial learning and memory, as well as selective impairment of non-spatial learning and memory, existed in older female KM mice.
Child–Adult Differences in Using Dual-Task Paradigms to Measure Listening Effort
Charles, Lauren M.; Ricketts, Todd A.
2017-01-01
Purpose The purpose of the project was to investigate the effects of modifying the secondary task in a dual-task paradigm used to measure objective listening effort. Specifically, the complexity and depth of processing were increased relative to a simple secondary task. Method Three dual-task paradigms were developed for school-age children. The primary task was word recognition. The secondary task was a physical response to a visual probe (simple task), a physical response to a complex probe (increased complexity), or word categorization (increased depth of processing). Sixteen adults (22–32 years, M = 25.4) and 22 children (9–17 years, M = 13.2) were tested using the 3 paradigms in quiet and noise. Results For both groups, manipulations of the secondary task did not affect word recognition performance. For adults, increasing depth of processing increased the calculated effect of noise; however, for children, results with the deep secondary task were the least stable. Conclusions Manipulations of the secondary task differentially affected adults and children. Consistent with previous findings, increased depth of processing enhanced paradigm sensitivity for adults. However, younger participants were more likely to demonstrate the expected effects of noise on listening effort using a secondary task that did not require deep processing. PMID:28346816
Optimizing estimation of hemispheric dominance for language using magnetic source imaging
Passaro, Antony D.; Rezaie, Roozbeh; Moser, Dana C.; Li, Zhimin; Dias, Nadeeka; Papanicolaou, Andrew C.
2011-01-01
The efficacy of magnetoencephalography (MEG) as an alternative to invasive methods for investigating the cortical representation of language has been explored in several studies. Recently, studies comparing MEG to the gold-standard Wada procedure have found inconsistent and often less-than-accurate estimates of laterality across various MEG studies. Here we attempted to address this issue among normal right-handed adults (N=12) by supplementing a well-established MEG protocol involving word recognition and the single dipole method with a sentence comprehension task and a beamformer approach localizing neural oscillations. Beamformer analysis of word recognition and sentence comprehension tasks revealed a desynchronization in the 10–18 Hz range, localized to the temporo-parietal cortices. Inspection of individual profiles of localized desynchronization (10–18 Hz) revealed left hemispheric dominance in 91.7% and 83.3% of individuals during the word recognition and sentence comprehension tasks, respectively. In contrast, single dipole analysis yielded lower estimates, such that activity in temporal language regions was left-lateralized in 66.7% and 58.3% of individuals during word recognition and sentence comprehension, respectively. The results obtained from the word recognition task and localization of oscillatory activity using a beamformer appear to be in line with general estimates of left hemispheric dominance for language in normal right-handed individuals. Furthermore, the current findings support the growing notion that changes in neural oscillations underlie critical components of linguistic processing. PMID:21890118
Golan, Ofer; Baron-Cohen, Simon; Hill, Jacqueline
2006-02-01
Adults with Asperger Syndrome (AS) can recognise simple emotions and pass basic theory of mind tasks, but have difficulties recognising more complex emotions and mental states. This study describes a new battery of tasks, testing recognition of 20 complex emotions and mental states from faces and voices. The battery was given to males and females with AS and matched controls. Results showed the AS group performed worse than controls overall, on emotion recognition from faces and voices and on 12/20 specific emotions. Females recognised faces better than males regardless of diagnosis, and males with AS had more difficulties recognising emotions from faces than from voices. The implications of these results are discussed in relation to social functioning in AS.
Deficits in facial affect recognition among antisocial populations: a meta-analysis.
Marsh, Abigail A; Blair, R J R
2008-01-01
Individuals with disorders marked by antisocial behavior frequently show deficits in recognizing displays of facial affect. Antisociality may be associated with specific deficits in identifying fearful expressions, which would implicate dysfunction in neural structures that subserve fearful expression processing. A meta-analysis of 20 studies was conducted to assess: (a) if antisocial populations show any consistent deficits in recognizing six emotional expressions; (b) beyond any generalized impairment, whether specific fear recognition deficits are apparent; and (c) if deficits in fear recognition are a function of task difficulty. Results show a robust link between antisocial behavior and specific deficits in recognizing fearful expressions. This impairment cannot be attributed solely to task difficulty. These results suggest dysfunction among antisocial individuals in specified neural substrates, namely the amygdala, involved in processing fearful facial affect.
Visual Word Recognition Across the Adult Lifespan
Cohen-Shikora, Emily R.; Balota, David A.
2016-01-01
The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629
Transfer Learning with Convolutional Neural Networks for SAR Ship Recognition
NASA Astrophysics Data System (ADS)
Zhang, Di; Liu, Jia; Heng, Wang; Ren, Kaijun; Song, Junqiang
2018-03-01
Ship recognition is the backbone of marine surveillance systems. Recent deep learning methods, e.g. Convolutional Neural Networks (CNNs), have shown high performance on optical images. Training CNNs, however, requires a large number of annotated samples to estimate the numerous model parameters, which prevents their application to Synthetic Aperture Radar (SAR) images, where annotated training samples are limited. Transfer learning is a promising technique for applications with limited data. To this end, a novel SAR ship recognition method based on CNNs with transfer learning has been developed. In this work, we first take a CNN model that has been trained in advance on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database. Next, building on the knowledge gained from this image recognition task, we fine-tune the CNN on a new task: recognizing three types of ships in the OpenSARShip database. The experimental results show that the proposed approach clearly increases the recognition rate compared with merely applying CNNs without transfer. In addition, compared to existing methods, the proposed method proves to be very competitive and learns discriminative features directly from training data instead of requiring manual pre-specification or pre-selection.
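The transfer-learning recipe the abstract describes, reusing a feature extractor learned on a data-rich source task and training only a new classification head on the small target set, can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' MSTAR-pretrained CNN: the frozen random projection stands in for pretrained convolutional layers, and the toy three-class data stands in for OpenSARShip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for convolutional layers pretrained on a source task (here a
# fixed random projection): its weights are FROZEN during transfer.
def frozen_features(images, W_pre):
    return np.maximum(images @ W_pre, 0.0)  # ReLU feature activations

def train_head(feats, labels, n_classes, lr=0.1, epochs=200):
    """Train only a new softmax classification head on frozen features."""
    W = np.zeros((feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (p - onehot) / len(labels)  # cross-entropy gradient
    return W

# Toy target-domain data: three "ship classes" with distinct pixel statistics.
n_pix, n_classes = 64, 3
W_pre = rng.normal(size=(n_pix, 32))              # "pretrained", kept frozen
labels = np.repeat(np.arange(n_classes), 30)
images = rng.normal(loc=labels[:, None], scale=0.3, size=(90, n_pix))

feats = frozen_features(images, W_pre)
feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)  # standardize
W_head = train_head(feats, labels, n_classes)
preds = (feats @ W_head).argmax(axis=1)
accuracy = (preds == labels).mean()
```

In practice fine-tuning often also updates the later pretrained layers at a small learning rate rather than keeping everything frozen; freezing all of them, as here, is the simplest variant and the most robust when target data are scarce.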
[Learning virtual routes: what does verbal coding do in working memory?].
Gyselinck, Valérie; Grison, Élise; Gras, Doriane
2015-03-01
Two experiments were run to complete our understanding of the role of verbal and visuospatial encoding in the construction of a spatial model from visual input. In Experiment 1, a dual-task paradigm was applied to young adults who learned a route in a virtual environment and then performed a series of nonverbal tasks to assess spatial knowledge. Results indicated that landmark knowledge, as assessed by visual recognition of landmarks, was not impaired by any of the concurrent tasks. Route knowledge, assessed by recognition of directions, was impaired both by a tapping task and by a concurrent articulation task. Interestingly, this pattern was modulated when no landmarks were available to perform the direction task. A second experiment was designed to explore the role of verbal coding in the construction of landmark and route knowledge. A lexical-decision task was used as a verbal-semantic dual task, and a tone-decision task as a nonsemantic auditory task. Results show that these new concurrent tasks differentially impaired landmark knowledge and route knowledge. The results can be interpreted as showing that the coding of route knowledge may be grounded both in a coding of the sequence of events and in a semantic coding of information. These findings also point to some limits of Baddeley's working memory model. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions
Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio
2015-01-01
The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expressions recognition task. Recognition bias was measured as participants’ tendency to over-attribute anger label to other negative facial expressions. Participants’ heart rate was assessed and related to their behavioral performance, as index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants’ performance was controlled for age, cognitive and educational levels and for naming skills. None of these variables influenced the recognition bias for angry facial expressions. Differently, a significant effect of heart rate on participants’ tendency to use anger label was evidenced. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children’s “pre-existing bias” for anger labeling in forced-choice emotion recognition task. Moreover, they strengthen the thesis according to which the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes victim’s perceptive and attentive focus on salient environmental social stimuli. PMID:26509890
Cippitelli, Andrea; Zook, Michelle; Bell, Lauren; Damadzic, Ruslan; Eskay, Robert L.; Schwandt, Melanie; Heilig, Markus
2010-01-01
Excessive alcohol use leads to neurodegeneration in several brain structures including the hippocampal dentate gyrus and the entorhinal cortex. Cognitive deficits that result are among the most insidious and debilitating consequences of alcoholism. The object exploration task (OET) provides a sensitive measurement of spatial memory impairment induced by hippocampal and cortical damage. In this study, we examine whether the observed neurotoxicity produced by a 4-day binge ethanol treatment results in long-term memory impairment by observing the time course of reactions to spatial change (object configuration) and non-spatial change (object recognition). Wistar rats were assessed for their abilities to detect spatial configuration in the OET at 1 week and 10 weeks following the ethanol treatment, in which ethanol groups received 9–15 g/kg/day and achieved blood alcohol levels over 300 mg/dl. At 1 week, results indicated that the binge alcohol treatment produced impairment in both spatial memory and non-spatial object recognition performance. Unlike the controls, ethanol treated rats did not increase the duration or number of contacts with the displaced object in the spatial memory task, nor did they increase the duration of contacts with the novel object in the object recognition task. After 10 weeks, spatial memory remained impaired in the ethanol treated rats but object recognition ability was recovered. Our data suggest that episodes of binge-like alcohol exposure result in long-term and possibly permanent impairments in memory for the configuration of objects during exploration, whereas the ability to detect non-spatial changes is only temporarily affected. PMID:20849966
ERIC Educational Resources Information Center
Brooks, Brian E.; Cooper, Eric E.
2006-01-01
Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…
An information-processing model of three cortical regions: evidence in episodic memory retrieval.
Sohn, Myeong-Ho; Goode, Adam; Stenger, V Andrew; Jung, Kwan-Jin; Carter, Cameron S; Anderson, John R
2005-03-01
ACT-R (Anderson, J.R., et al., 2003. An information-processing model of the BOLD response in symbol manipulation tasks. Psychon. Bull. Rev. 10, 241-261) relates the inferior dorso-lateral prefrontal cortex to a retrieval buffer that holds information retrieved from memory and the posterior parietal cortex to an imaginal buffer that holds problem representations. Because the number of changes in a problem representation is not necessarily correlated with retrieval difficulties, it is possible to dissociate prefrontal-parietal activations. In two fMRI experiments, we examined this dissociation using the fan effect paradigm. Experiment 1 compared a recognition task, in which representation requirement remains the same regardless of retrieval difficulty, with a recall task, in which both representation and retrieval loads increase with retrieval difficulty. In the recognition task, the prefrontal activation revealed a fan effect but not the parietal activation. In the recall task, both regions revealed fan effects. In Experiment 2, we compared visually presented stimuli and aurally presented stimuli using the recognition task. While only the prefrontal region revealed the fan effect, the activation patterns in the prefrontal and the parietal region did not differ by stimulus presentation modality. In general, these results provide support for the prefrontal-parietal dissociation in terms of retrieval and representation and the modality-independent nature of the information processed by these regions. Using ACT-R, we also provide computational models that explain patterns of fMRI responses in these two areas during recognition and recall.
Development of detection and recognition of orientation of geometric and real figures.
Stein, N L; Mandler, J M
1975-06-01
Black and white kindergarten and second-grade children were tested for accuracy of detection and recognition of orientation and location changes in pictures of real-world and geometric figures. No differences were found in accuracy of recognition between the 2 kinds of pictures, but patterns of verbalization differed on specific transformations. Although differences in accuracy were found between kindergarten and second grade on an initial recognition task, practice on a matching-to-sample task eliminated differences on a second recognition task. Few ethnic differences were found on accuracy of recognition, but significant differences were found in amount of verbal output on specific transformations. For both groups, mention of orientation changes was markedly reduced when location changes were present.
Cui, Xiaoyu; Gao, Chuanji; Zhou, Jianshe; Guo, Chunyan
2016-09-28
It has been widely shown that recognition memory includes two distinct retrieval processes: familiarity and recollection. Many studies have shown that recognition memory can be facilitated when there is a perceptual match between the studied and the tested items. Most event-related potential studies have explored the perceptual match effect on familiarity on the basis of the hypothesis that the specific event-related potential component associated with familiarity is the FN400 (300-500 ms mid-frontal effect). However, it is currently unclear whether the FN400 indexes familiarity or conceptual implicit memory. In addition, on the basis of the findings of a previous study, the so-called perceptual manipulations in previous studies may also involve some conceptual alterations. Therefore, we sought to determine the influence of perceptual manipulation by color changes on recognition memory when the perceptual or the conceptual processes were emphasized. Specifically, different instructions (perceptually or conceptually oriented) were provided to the participants. The results showed that color changes may significantly affect overall recognition memory behaviorally and that congruent items were recognized with a higher accuracy rate than incongruent items in both tasks, but no corresponding neural changes were found. Despite the evident familiarity shown in the two tasks (the behavioral performance of recognition memory was much higher than chance level), the FN400 effect was found in conceptually oriented tasks, but not perceptually oriented tasks. It is thus striking that no FN400 effect was elicited even though the color manipulation influenced recognition memory behaviorally, as in previous studies. Our findings of the FN400 effect for the conceptual but not perceptual condition support the explanation that the FN400 effect indexes conceptual implicit memory.
Söderlund, Göran B. W.; Jobs, Elisabeth Nilsson
2016-01-01
The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting ∼6–9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors, as well as poor performance in various cognitive tasks, often leading to failures at school. Sensory and perceptual dysfunctions have also been noted. Prior research has mainly focused on limitations in executive functioning, where differences are often explained by deficits in prefrontal cortex activation. Less notice has been given to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brainstem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between attentive children and children with ADHD symptoms in two environmental sound conditions, with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether this noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman's speech recognition test in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise (65 dB) conditions. Results showed that the inattentive group displayed a higher speech recognition threshold than typically developing children and that the difference in speech recognition threshold disappeared when participants were exposed to noise at a supra-threshold level. From this we conclude that inattention can partly be explained by sensory perceptual limitations that can possibly be ameliorated through noise exposure. PMID:26858679
Kliemann, Dorit; Rosenblau, Gabriela; Bölte, Sven; Heekeren, Hauke R.; Dziobek, Isabel
2013-01-01
Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual-process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in the respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between-group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between-group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social cognitive interventions. PMID:23805122
When the face fits: recognition of celebrities from matching and mismatching faces and voices.
Stevenage, Sarah V; Neil, Greg J; Hamlin, Iain
2014-01-01
The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face-voice pairs in which the face and voice were co-presented and were either "matched" (same person), "related" (two highly associated people), or "mismatched" (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition were consistently high regardless of the identity of the accompanying voice. However, accuracy of voice recognition was increasingly affected as the relationship between voice and accompanying face declined. Moreover, when considering self-reported confidence in voice recognition, confidence remained high for correct responses despite the proportion of these responses declining across conditions. These results converged with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and the results are discussed in the context of a person-recognition framework.
Alterations in Resting-State Activity Relate to Performance in a Verbal Recognition Task
López Zunini, Rocío A.; Thivierge, Jean-Philippe; Kousaie, Shanna; Sheppard, Christine; Taler, Vanessa
2013-01-01
In the brain, resting-state activity refers to non-random patterns of intrinsic activity occurring when participants are not actively engaged in a task. We monitored resting-state activity using electroencephalography (EEG) both before and after a verbal recognition task. We show a strong positive correlation between accuracy in verbal recognition and pre-task resting-state alpha power at posterior sites. We further characterized this effect by examining resting-state post-task activity. We found marked alterations in resting-state alpha power when comparing pre- and post-task periods, with more pronounced alterations in participants that attained higher task accuracy. These findings support a dynamical view of cognitive processes where patterns of ongoing brain activity can facilitate, or interfere with, optimal task performance. PMID:23785436
Effects of memory load on hemispheric asymmetries of colour memory.
Clapp, Wes; Kirk, Ian J; Hausmann, Markus
2007-03-01
Hemispheric asymmetries in colour perception have been a matter of debate for some time. Recent evidence suggests that lateralisation of colour processing may be largely task-specific. Here we investigated hemispheric asymmetries during different types and phases of a delayed colour-matching (recognition) memory task. A total of 11 male and 12 female right-handed participants performed colour-memory tasks. The task involved presentation of a set of colour stimuli (encoding) and subsequent indication (forced choice) of which colours in a larger set had previously appeared at the retrieval or recognition phase. The effects of memory load (set size) and of lateralisation at the encoding and retrieval phases were investigated. Overall, the results indicate a right hemisphere advantage in colour processing, which was particularly pronounced in high memory load conditions, and was seen in male rather than female participants. The results suggest that verbal (mnemonic) strategies can significantly affect the magnitude of hemispheric asymmetries in a non-verbal task.
The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.
Norris, Dennis
2006-04-01
This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers. ((c) 2006 APA, all rights reserved).
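The core claim, that readers combine noisy perceptual evidence with a frequency-weighted prior over the lexicon and respond once the posterior for one word is high enough, can be sketched with a toy example. This is a schematic illustration under strong simplifying assumptions (letters coded as single numbers, Gaussian sampling noise, a three-word lexicon with invented frequencies), not Norris's actual Bayesian reader implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical three-word lexicon; the prior encodes relative word frequency.
lexicon = ["cat", "car", "can"]
prior = np.array([0.6, 0.3, 0.1])
codes = np.array([[ord(c) for c in w] for w in lexicon], dtype=float)

def identify(true_word, noise=0.5, threshold=0.95, max_samples=100):
    """Accumulate noisy perceptual samples of the printed word and update a
    posterior over the lexicon; respond once one word exceeds threshold."""
    signal = np.array([ord(c) for c in true_word], dtype=float)
    log_post = np.log(prior)  # start from the frequency prior
    for n in range(1, max_samples + 1):
        sample = signal + rng.normal(0.0, noise, size=signal.shape)
        # Gaussian log-likelihood of the sample under each candidate word.
        log_post += -0.5 * ((sample - codes) ** 2).sum(axis=1) / noise**2
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() >= threshold:
            return lexicon[post.argmax()], n
    return lexicon[post.argmax()], max_samples

word, n_samples = identify("cat")
```

Because the decision threshold is on the posterior, a higher-frequency word starts closer to threshold and on average needs less perceptual evidence, which is how a model of this kind produces word-frequency effects on identification time.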
Developmental plateau in visual object processing from adolescence to adulthood in autism
O'Hearn, Kirsten; Tanaka, James; Lynn, Andrew; Fedor, Jennifer; Minshew, Nancy; Luna, Beatriz
2016-01-01
A lack of typical age-related improvement from adolescence to adulthood contributes to face recognition deficits in adults with autism on the Cambridge Face Memory Test (CFMT). The current studies examine if this atypical developmental trajectory generalizes to other tasks and objects, including parts of the face. The CFMT tests recognition of whole faces, often with a substantial delay. The current studies used the immediate memory (IM) task and the parts-whole face task from the Let's Face It! battery, which examines whole faces, face parts, and cars, without a delay between memorization and test trials. In the IM task, participants memorize a face or car. Immediately after the target disappears, participants identify the target from two similar distractors. In the part-whole task, participants memorize a whole face. Immediately after the face disappears, participants identify the target from a distractor with different eyes or mouth, either as a face part or a whole face. Results indicate that recognition deficits in autism become more robust by adulthood, consistent with previous work, and also become more general, including cars. In the IM task, deficits in autism were specific to faces in childhood, but included cars by adulthood. In the part-whole task, deficits in autism became more robust by adulthood, including both eyes and mouths as parts and in whole faces. Across tasks, the deficit in autism increased between adolescence and adulthood, reflecting a lack of typical improvement, leading to deficits with non-face stimuli and on a task without a memory delay. These results suggest that brain maturation continues to be affected into adulthood in autism, and that the transition from adolescence to adulthood is a vulnerable stage for those with autism. PMID:25019999
Further insight into self-face recognition in schizophrenia patients: Why ambiguity matters.
Bortolon, Catherine; Capdevielle, Delphine; Salesse, Robin N; Raffard, Stephane
2016-03-01
Although some studies reported specifically self-face processing deficits in patients with schizophrenia disorder (SZ), it remains unclear whether these deficits instead reflect a more global face processing deficit. Contradictory results are probably due to the different methodologies employed and the lack of control of other confounding factors. Moreover, no study has so far evaluated possible daily life self-face recognition difficulties in SZ. Therefore, our primary objective was to investigate self-face recognition in patients suffering from SZ compared to healthy controls (HC) using an "objective measure" (reaction time and accuracy) and a "subjective measure" (self-report of daily self-face recognition difficulties). Twenty-four patients with SZ and 23 HC performed a self-face recognition task and completed a questionnaire evaluating daily difficulties in self-face recognition. Recognition task material consisted of three different faces (the participant's own, a famous, and an unknown face) morphed in steps of 20%. Results showed that SZ were overall slower than HC regardless of the face identity, but less accurate only for the faces containing 60%-40% morphing. Moreover, SZ and HC reported a similar amount of daily problems with self/other face recognition. No significant correlations were found between objective and subjective measures (p > 0.05). The small sample size and relatively mild severity of psychopathology does not allow us to generalize our results. These results suggest that: (1) patients with SZ are as capable of recognizing their own face as HC, although they are susceptible to ambiguity; (2) there are far fewer self-recognition deficits in schizophrenia patients than previously postulated. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Mulligan, Neil W.; Besken, Miri; Peterson, Daniel
2010-01-01
Remember-Know (RK) and source memory tasks were designed to elucidate processes underlying memory retrieval. As part of more complex judgments, both tests produce a measure of old-new recognition, which is typically treated as equivalent to that derived from a standard recognition task. The present study demonstrates, however, that recognition…
The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information
ERIC Educational Resources Information Center
Liu, Chang Hong; Ward, James; Markall, Helena
2007-01-01
Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…
Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.
Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina
2018-05-14
The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance, or even enable, understanding of what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: together with similar findings in the auditory modality, the findings imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition. Copyright © 2018. Published by Elsevier Inc.
Gonzalez-Gadea, Maria Luz; Herrera, Eduar; Parra, Mario; Gomez Mendez, Pedro; Baez, Sandra; Manes, Facundo; Ibanez, Agustin
2014-01-01
Emotion recognition and empathy abilities require the integration of contextual information in real-life scenarios. Previous reports have explored these domains in adolescent offenders (AOs) but have not used tasks that replicate everyday situations. In this study we included ecological measures with different levels of contextual dependence to evaluate emotion recognition and empathy in AOs relative to non-offenders, controlling for the effect of demographic variables. We also explored the influence of fluid intelligence (FI) and executive functions (EFs) in the prediction of relevant deficits in these domains. Our results showed that AOs exhibit deficits in context-sensitive measures of emotion recognition and cognitive empathy. Difficulties in these tasks were neither explained by demographic variables nor predicted by FI or EFs. However, performance on measures that included simpler stimuli or could be solved by explicit knowledge was either only partially affected by demographic variables or preserved in AOs. These findings indicate that AOs show contextual social-cognition impairments which are relatively independent of basic cognitive functioning and demographic variables. PMID:25374529
Keys to the Adoption and Use of Voice Recognition Technology in Organizations.
ERIC Educational Resources Information Center
Goette, Tanya
2000-01-01
Presents results from a field study of individuals with disabilities who used voice recognition technology (VRT). Results indicated that task-technology fit, training, the environment, and disability limitations were the differentiating items, and that using VRT for a trial period may be the major factor in successful adoption of the technology.…
The effect of inversion on face recognition in adults with autism spectrum disorder.
Hedley, Darren; Brewer, Neil; Young, Robyn
2015-05-01
Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age- and IQ-matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD performed worse than controls on the recognition task but did not show an advantage for inverted face recognition. Both groups directed more visual attention to the eye region than to the mouth region, and gaze patterns were not found to be associated with recognition performance. These results provide evidence of a normal effect of inversion on face recognition in adults with ASD.
Goghari, Vina M; Macdonald, Angus W; Sponheim, Scott R
2011-11-01
Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.
Holdstock, J S; Mayes, A R; Roberts, N; Cezayirli, E; Isaac, C L; O'Reilly, R C; Norman, K A
2002-01-01
The claim that recognition memory is spared relative to recall after focal hippocampal damage has been disputed in the literature. We examined this claim by investigating object and object-location recall and recognition memory in a patient, YR, who has adult-onset selective hippocampal damage. Our aim was to identify the conditions under which recognition was spared relative to recall in this patient. She showed unimpaired forced-choice object recognition but clearly impaired recall, even when her control subjects found the object recognition task to be numerically harder than the object recall task. However, on two other recognition tests, YR's performance was not relatively spared. First, she was clearly impaired at an equivalently difficult yes/no object recognition task, but only when targets and foils were very similar. Second, YR was clearly impaired at forced-choice recognition of object-location associations. This impairment was also unrelated to difficulty because this task was no more difficult than the forced-choice object recognition task for control subjects. The clear impairment of yes/no, but not of forced-choice, object recognition after focal hippocampal damage, when targets and foils are very similar, is predicted by the neural network-based Complementary Learning Systems model of recognition. This model postulates that recognition is mediated by hippocampally dependent recollection and cortically dependent familiarity; thus hippocampal damage should not impair item familiarity. The model postulates that familiarity is ineffective when very similar targets and foils are shown one at a time and subjects have to identify which items are old (yes/no recognition). In contrast, familiarity is effective in discriminating which of similar targets and foils, seen together, is old (forced-choice recognition). Independent evidence from the remember/know procedure also indicates that YR's familiarity is normal. The Complementary Learning Systems model can also accommodate the clear impairment of forced-choice object-location recognition memory if it incorporates the view that the most complete convergence of spatial and object information, represented in different cortical regions, occurs in the hippocampus.
Ho, Michael R; Pezdek, Kathy
2016-06-01
The cross-race effect (CRE) describes the finding that same-race faces are recognized more accurately than cross-race faces. According to social-cognitive theories of the CRE, processes of categorization and individuation at encoding account for differential recognition of same- and cross-race faces. Recent face memory research has suggested that similar but distinct categorization and individuation processes also occur postencoding, at recognition. Using a divided-attention paradigm, in Experiments 1A and 1B we tested and confirmed the hypothesis that distinct postencoding categorization and individuation processes occur during the recognition of same- and cross-race faces. Specifically, postencoding configural divided-attention tasks impaired recognition accuracy more for same-race than for cross-race faces; on the other hand, for White (but not Black) participants, postencoding featural divided-attention tasks impaired recognition accuracy more for cross-race than for same-race faces. A social categorization paradigm used in Experiments 2A and 2B tested the hypothesis that the postencoding in-group or out-group social orientation to faces affects categorization and individuation processes during the recognition of same-race and cross-race faces. Postencoding out-group orientation to faces resulted in categorization for White but not for Black participants. This was evidenced by White participants' impaired recognition accuracy for same-race but not for cross-race out-group faces. Postencoding in-group orientation to faces had no effect on recognition accuracy for either same-race or cross-race faces. The results of Experiments 2A and 2B suggest that this social orientation facilitates White but not Black participants' individuation and categorization processes at recognition. Models of recognition memory for same-race and cross-race faces need to account for processing differences that occur at both encoding and recognition.
Higher-Order Neural Networks Applied to 2D and 3D Object Recognition
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Reid, Max B.
1994-01-01
A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.
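The built-in invariance described in this abstract can be sketched in a few lines: if a second-order unit's weights are indexed only by the relative offset between pairs of active pixels, a translated pattern activates exactly the same weights and yields the same response. The toy binary images and weight values below are illustrative assumptions, not the HONN architecture from the paper.

```python
# Sketch of a translation-invariant second-order unit: weights depend only on
# the relative offset between active-pixel pairs, never on absolute position.
def second_order_response(image, weights):
    """image: set of (row, col) active-pixel coordinates.
    weights: dict mapping relative offsets (dr, dc) -> weight."""
    total = 0.0
    for (r1, c1) in image:
        for (r2, c2) in image:
            total += weights.get((r2 - r1, c2 - c1), 0.0)
    return total

# A diagonal pixel pair, and the same pattern translated by (3, 5).
pattern = {(0, 0), (1, 1)}
shifted = {(r + 3, c + 5) for (r, c) in pattern}
weights = {(0, 0): 0.5, (1, 1): 2.0, (-1, -1): 2.0}

print(second_order_response(pattern, weights))   # 5.0
print(second_order_response(shifted, weights))   # 5.0: invariant to translation
```

Because the invariance holds by construction, no translated training views are needed, which is the paper's argument for smaller training sets.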
A steady state visually evoked potential investigation of memory and ageing.
Macpherson, Helen; Pipingas, Andrew; Silberstein, Richard
2009-04-01
Old age is generally accompanied by a decline in memory performance. Specifically, neuroimaging and electrophysiological studies have revealed that there are age-related changes in the neural correlates of episodic and working memory. This study investigated age-associated changes in the steady state visually evoked potential (SSVEP) amplitude and latency associated with memory performance. Participants were 15 older (59-67 years) and 14 younger (20-30 years) adults who performed an object working memory (OWM) task and a contextual recognition memory (CRM) task, whilst the SSVEP was recorded from 64 electrode sites. Retention of a single object in the low demand OWM task was characterised by smaller frontal SSVEP amplitude and latency differences in older adults than in younger adults, indicative of an age-associated reduction in neural processes. Recognition of visual images in the more difficult CRM task was accompanied by larger, more sustained SSVEP amplitude and latency decreases over temporal parietal regions in older adults. In contrast, the more transient, frontally mediated pattern of activity demonstrated by younger adults suggests that younger and older adults utilize different neural resources to perform recognition judgements. The results provide support for compensatory processes in the aging brain; at lower task demands, older adults demonstrate reduced neural activity, whereas at greater task demands neural activity is increased.
Clustered Multi-Task Learning for Automatic Radar Target Recognition
Li, Cong; Bao, Weimin; Xu, Luping; Zhang, Hua
2017-01-01
Model training is a key technique for radar target recognition. Traditional model training algorithms in the framework of single-task learning ignore the relationships among multiple tasks, which degrades recognition performance. In this paper, we propose a clustered multi-task learning method, which can reveal and share the multi-task relationships for radar target recognition. To make fuller use of these relationships, the latent multi-task relationships in the projection space are also taken into consideration. Specifically, a constraint term in the projection space is proposed, the main idea of which is that multiple tasks within a close cluster should be close to each other in the projection space. In the proposed method, the cluster structures and multi-task relationships can be autonomously learned and utilized in both the original and the projected spaces. In view of the nonlinear characteristics of radar targets, the proposed method is extended to a nonlinear kernel version and the corresponding nonlinear multi-task solving method is proposed. Comprehensive experimental studies on a simulated high-resolution range profile dataset and the public MSTAR SAR database verify the superiority of the proposed method over several related algorithms. PMID:28953267
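The core idea above, that tasks in the same cluster should stay close in parameter space, can be sketched with scalar "models" and a pairwise penalty. The fixed clustering, scalar weights, and gradient-descent solver are simplifications assumed for this sketch; the paper learns the clusters and adds a projection-space constraint.

```python
# Minimal sketch of cluster sharing in multi-task learning: each task fits its
# own target, plus a penalty pulling same-cluster parameters together.
def fit_multitask(targets, clusters, lam=1.0, lr=0.05, steps=2000):
    """Minimise sum_t (w_t - targets[t])^2
       + lam * sum over same-cluster pairs (w_i - w_j)^2, by gradient descent."""
    w = [0.0] * len(targets)
    for _ in range(steps):
        grad = [2 * (w[t] - targets[t]) for t in range(len(targets))]
        for group in clusters:
            for i in group:
                for j in group:
                    if i != j:
                        grad[i] += 2 * lam * (w[i] - w[j])
        w = [wi - lr * g for wi, g in zip(w, grad)]
    return w

# Two clusters of related recognition tasks; per-task optima differ slightly.
targets = [1.0, 1.2, 5.0, 5.4]
clusters = [(0, 1), (2, 3)]
shared = fit_multitask(targets, clusters, lam=5.0)
solo = fit_multitask(targets, clusters, lam=0.0)
# Within a cluster, sharing pulls the task parameters toward each other.
assert abs(shared[0] - shared[1]) < abs(solo[0] - solo[1])
```

With the penalty active, the within-cluster gap shrinks by roughly a factor of (1 + 2*lam) relative to the independently fitted solution.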
About-face on face recognition ability and holistic processing
Richler, Jennifer J.; Floyd, R. Jackie; Gauthier, Isabel
2015-01-01
Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically. PMID:26223027
Chemical entity recognition in patents by combining dictionary-based and statistical approaches.
Akhondi, Saber A; Pons, Ewoud; Afzal, Zubair; van Haagen, Herman; Becker, Benedikt F H; Hettne, Kristina M; van Mulligen, Erik M; Kors, Jan A
2016-01-01
We describe the development of a chemical entity recognition system and its application in the CHEMDNER-patent track of BioCreative 2015. This community challenge includes a Chemical Entity Mention in Patents (CEMP) recognition task and a Chemical Passage Detection (CPD) classification task. We addressed both tasks by an ensemble system that combines a dictionary-based approach with a statistical one. For this purpose the performance of several lexical resources was assessed using Peregrine, our open-source indexing engine. We combined our dictionary-based results on the patent corpus with the results of tmChem, a chemical recognizer using a conditional random field classifier. To improve the performance of tmChem, we utilized three additional features, viz. part-of-speech tags, lemmas and word-vector clusters. When evaluated on the training data, our final system obtained an F-score of 85.21% for the CEMP task, and an accuracy of 91.53% for the CPD task. On the test set, the best system ranked sixth among 21 teams for CEMP with an F-score of 86.82%, and second among nine teams for CPD with an accuracy of 94.23%. The differences in performance between the best ensemble system and the statistical system separately were small.Database URL: http://biosemantics.org/chemdner-patents. © The Author(s) 2016. Published by Oxford University Press.
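The dictionary-plus-statistical ensemble described above can be illustrated with a toy merging strategy. The three-entry dictionary, the suffix-based stand-in for the CRF, and the union-based merge are all assumptions made for this sketch; they are not the actual Peregrine/tmChem pipeline.

```python
# Toy ensemble chemical-entity recognizer: a longest-match dictionary tagger
# combined (by union) with a mock "statistical" tagger.
CHEM_DICT = {"aspirin", "acetylsalicylic acid", "ethanol"}

def dictionary_tagger(tokens):
    """Longest-match dictionary lookup (spans of up to 2 tokens)."""
    mentions, i = [], 0
    while i < len(tokens):
        two = " ".join(tokens[i:i + 2]).lower()
        one = tokens[i].lower()
        if i + 1 < len(tokens) and two in CHEM_DICT:
            mentions.append((i, i + 2)); i += 2
        elif one in CHEM_DICT:
            mentions.append((i, i + 1)); i += 1
        else:
            i += 1
    return mentions

def statistical_tagger(tokens):
    """Stand-in for a CRF: flags tokens with a chemical-looking suffix."""
    return [(i, i + 1) for i, t in enumerate(tokens)
            if t.lower().endswith(("ol", "ine", "ate"))]

def ensemble(tokens):
    """One simple merging strategy: the union of both recognizers' spans."""
    return sorted(set(dictionary_tagger(tokens)) | set(statistical_tagger(tokens)))

tokens = "Patients received acetylsalicylic acid and caffeine".split()
print(ensemble(tokens))  # [(2, 4), (5, 6)]
```

Here the dictionary catches the multi-word mention the suffix rule misses, while the suffix rule catches the out-of-dictionary term, which is the usual motivation for such ensembles.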
The mere exposure effect and recognition depend on the way you look!
Willems, Sylvie; Dedonder, Jonathan; Van der Linden, Martial
2010-01-01
In line with Whittlesea and Price (2001), we investigated whether the memory effect measured with an implicit memory paradigm (the mere exposure effect) and an explicit recognition task depended on perceptual processing strategies, regardless of whether the task required intentional retrieval. We found that a manipulation intended to prompt a functional implicit-explicit dissociation no longer had a differential effect when we induced similar perceptual strategies in both tasks. Indeed, the results showed that prompting a nonanalytic strategy ensured above-chance performance on both tasks. Conversely, inducing an analytic strategy drastically decreased both explicit and implicit performance. Furthermore, we noted that the nonanalytic strategy involved less extensive gaze scanning than the analytic strategy and that memory effects under this processing strategy were largely independent of gaze movement.
Optimizing estimation of hemispheric dominance for language using magnetic source imaging.
Passaro, Antony D; Rezaie, Roozbeh; Moser, Dana C; Li, Zhimin; Dias, Nadeeka; Papanicolaou, Andrew C
2011-10-06
The efficacy of magnetoencephalography (MEG) as an alternative to invasive methods for investigating the cortical representation of language has been explored in several studies. Recently, studies comparing MEG to the gold-standard Wada procedure have found inconsistent and often less-than-accurate estimates of laterality across various MEG studies. Here we attempted to address this issue among normal right-handed adults (N=12) by supplementing a well-established MEG protocol involving word recognition and the single dipole method with a sentence comprehension task and a beamformer approach localizing neural oscillations. Beamformer analysis of the word recognition and sentence comprehension tasks revealed a desynchronization in the 10-18 Hz range, localized to the temporo-parietal cortices. Inspection of individual profiles of localized desynchronization (10-18 Hz) revealed left-hemispheric dominance in 91.7% and 83.3% of individuals during the word recognition and sentence comprehension tasks, respectively. In contrast, single dipole analysis yielded lower estimates, such that activity in temporal language regions was left-lateralized in 66.7% and 58.3% of individuals during word recognition and sentence comprehension, respectively. The results obtained from the word recognition task and localization of oscillatory activity using a beamformer appear to be in line with general estimates of left-hemispheric dominance for language in normal right-handed individuals. Furthermore, the current findings support the growing notion that changes in neural oscillations underlie critical components of linguistic processing. Published by Elsevier B.V.
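The per-participant classifications reported above (left-lateralized or not) are conventionally derived from a laterality index computed over left- and right-hemisphere activation measures. The formula below is the commonly used LI; the ±0.1 cutoffs are a conventional assumption and not necessarily the criterion used in this particular study.

```python
# Common laterality index: LI = (L - R) / (L + R), in [-1, 1].
def laterality_index(left, right):
    """Positive LI = left-dominant activity, negative = right-dominant."""
    if left + right == 0:
        return 0.0
    return (left - right) / (left + right)

def classify(left, right, cutoff=0.1):
    """Classify a participant from hemispheric activation measures
    (cutoff of 0.1 is a conventional, study-specific assumption)."""
    li = laterality_index(left, right)
    if li > cutoff:
        return "left-dominant"
    if li < -cutoff:
        return "right-dominant"
    return "bilateral"

print(classify(8.0, 3.0))  # prints "left-dominant" (LI = 5/11, about 0.45)
```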
Grossberg, Stephen; Markowitz, Jeffrey; Cao, Yongqiang
2011-12-01
Visual object recognition is an essential accomplishment of advanced brains. Object recognition needs to be tolerant, or invariant, with respect to changes in object position, size, and view. In monkeys and humans, a key area for recognition is the anterior inferotemporal cortex (ITa). Recent neurophysiological data show that ITa cells with high object selectivity often have low position tolerance. We propose a neural model whose cells learn to simulate this tradeoff, as well as ITa responses to image morphs, while explaining how invariant recognition properties may arise in stages due to processes across multiple cortical areas. These processes include the cortical magnification factor, multiple receptive field sizes, and top-down attentive matching and learning properties that may be tuned by task requirements to attend to either concrete or abstract visual features with different levels of vigilance. The model predicts that data from the tradeoff and image morph tasks emerge from different levels of vigilance in the animals performing them. This result illustrates how different vigilance requirements of a task may change the course of category learning, notably the critical features that are attended and incorporated into learned category prototypes. The model outlines a path for developing an animal model of how defective vigilance control can lead to symptoms of various mental disorders, such as autism and amnesia. Copyright © 2011 Elsevier Ltd. All rights reserved.
Sources of Interference in Recognition Testing
ERIC Educational Resources Information Center
Annis, Jeffrey; Malmberg, Kenneth J.; Criss, Amy H.; Shiffrin, Richard M.
2013-01-01
Recognition memory accuracy is harmed by prior testing (a.k.a., output interference [OI]; Tulving & Arbuckle, 1966). In several experiments, we interpolated various tasks between recognition test trials. The stimuli and the tasks were more similar (lexical decision [LD] of words and nonwords) or less similar (gender identification of male and…
Calvo, Manuel G; Nummenmaa, Lauri
2009-12-01
Happy, surprised, disgusted, angry, sad, fearful, and neutral faces were presented extrafoveally, with fixations on faces allowed or not. The faces were preceded by a cue word that designated the face to be saccaded in a two-alternative forced-choice discrimination task (2AFC; Experiments 1 and 2), or were followed by a probe word for recognition (Experiment 3). Eye tracking was used to decompose the recognition process into stages. Relative to the other expressions, happy faces (1) were identified faster (as early as 160 msec from stimulus onset) in extrafoveal vision, as revealed by shorter saccade latencies in the 2AFC task; (2) required less encoding effort, as indexed by shorter first fixations and dwell times; and (3) required less decision-making effort, as indicated by fewer refixations on the face after the recognition probe was presented. This reveals a happy-face identification advantage both prior to and during overt attentional processing. The results are discussed in relation to prior neurophysiological findings on latencies in facial expression recognition.
Word Spotting and Recognition with Embedded Attributes.
Almazán, Jon; Gordo, Albert; Fornés, Alicia; Valveny, Ernest
2014-12-01
This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.
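The central trick above, embedding word images and text strings in one vector space so recognition becomes nearest-neighbor search, can be sketched with a toy attribute embedding. In the real system images are embedded via learned attribute models (PHOC-style character attributes); here both the "image" and the lexicon are embedded from strings, an assumption made only so the sketch is self-contained.

```python
# Toy common-subspace word recognition: embed words as binary character
# attributes over a crude two-level spatial pyramid, then take the nearest
# lexicon entry under cosine similarity.
import math
import string

def attribute_embedding(word):
    """Fixed-length binary vector: does character c appear in the first /
    second half of the word?"""
    word = word.lower()
    half = max(1, len(word) // 2)
    vec = []
    for part in (word[:half], word[half:]):
        vec.extend(1.0 if c in part else 0.0 for c in string.ascii_lowercase)
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recognize(query_word, lexicon):
    """Cast recognition as a nearest-neighbor problem in the shared space."""
    q = attribute_embedding(query_word)
    return max(lexicon, key=lambda w: cosine(q, attribute_embedding(w)))

lexicon = ["recognition", "recollection", "familiarity", "memory"]
print(recognize("recogniton", lexicon))  # a noisy query still maps to "recognition"
```

Because every word, clean or noisy, lands in the same fixed-length space, spotting (retrieval) and recognition reduce to the same fast vector comparison, which is the paper's main selling point.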
Wang, Lei; Apple, Alexandra C; Schroeder, Matthew P; Ryals, Anthony J; Voss, Joel L; Gitelman, Darren; Sweet, Jerry J; Butt, Zeeshan A; Cella, David; Wagner, Lynne I
2016-01-15
Patients who receive adjuvant chemotherapy have reported cognitive impairments that may last for years after the completion of treatment. Working memory-related and long-term memory-related changes in this population are not well understood. The objective of this study was to demonstrate that cancer-related cognitive impairments are associated with the under-recruitment of brain regions involved in working and recognition memory compared with controls. Oncology patients (n = 15) who were receiving adjuvant chemotherapy and had evidence of cognitive impairment according to neuropsychological testing and self-report, and a group of age-matched, education-matched, cognitively normal control participants (n = 14), underwent functional magnetic resonance imaging. During functional magnetic resonance imaging, participants performed a nonverbal n-back working memory task and a visual recognition task. On the working memory task, when 1-back and 2-back data were averaged and contrasted with 0-back data, significantly reduced activation was observed in the right dorsolateral prefrontal cortex for oncology patients versus controls. On the recognition task, oncology patients displayed decreased activity of the left-middle hippocampus compared with controls. Neuroimaging results were not associated with patient-reported cognition. Decreased recruitment of brain regions associated with the encoding of working memory and recognition memory was observed in the oncology patients compared with the control group. These results suggest that there is a reduction in neural functioning postchemotherapy and corroborate patient-reported cognitive difficulties after cancer treatment, although a direct association was not observed. Cancer 2016;122:258-268. © 2015 American Cancer Society.
Cippitelli, Andrea; Zook, Michelle; Bell, Lauren; Damadzic, Ruslan; Eskay, Robert L; Schwandt, Melanie; Heilig, Markus
2010-11-01
Excessive alcohol use leads to neurodegeneration in several brain structures including the hippocampal dentate gyrus and the entorhinal cortex. Cognitive deficits that result are among the most insidious and debilitating consequences of alcoholism. The object exploration task (OET) provides a sensitive measurement of spatial memory impairment induced by hippocampal and cortical damage. In this study, we examine whether the observed neurotoxicity produced by a 4-day binge ethanol treatment results in long-term memory impairment by observing the time course of reactions to spatial change (object configuration) and non-spatial change (object recognition). Wistar rats were assessed for their abilities to detect spatial configuration in the OET at 1 week and 10 weeks following the ethanol treatment, in which ethanol groups received 9-15 g/kg/day and achieved blood alcohol levels over 300 mg/dl. At 1 week, results indicated that the binge alcohol treatment produced impairment in both spatial memory and non-spatial object recognition performance. Unlike the controls, ethanol treated rats did not increase the duration or number of contacts with the displaced object in the spatial memory task, nor did they increase the duration of contacts with the novel object in the object recognition task. After 10 weeks, spatial memory remained impaired in the ethanol treated rats but object recognition ability was recovered. Our data suggest that episodes of binge-like alcohol exposure result in long-term and possibly permanent impairments in memory for the configuration of objects during exploration, whereas the ability to detect non-spatial changes is only temporarily affected. Copyright © 2010 Elsevier Inc. All rights reserved.
Social Cognition Psychometric Evaluation: Results of the Initial Psychometric Study
Pinkham, Amy E.; Penn, David L.; Green, Michael F.; Harvey, Philip D.
2016-01-01
Measurement of social cognition in treatment trials remains problematic due to poor and limited psychometric data for many tasks. As part of the Social Cognition Psychometric Evaluation (SCOPE) study, the psychometric properties of 8 tasks were assessed. One hundred and seventy-nine stable outpatients with schizophrenia and 104 healthy controls completed the battery at baseline and a 2–4-week retest period at 2 sites. Tasks included the Ambiguous Intentions Hostility Questionnaire (AIHQ), Bell Lysaker Emotion Recognition Task (BLERT), Penn Emotion Recognition Task (ER-40), Relationships Across Domains (RAD), Reading the Mind in the Eyes Task (Eyes), The Awareness of Social Inferences Test (TASIT), Hinting Task, and Trustworthiness Task. Tasks were evaluated on: (i) test-retest reliability, (ii) utility as a repeated measure, (iii) relationship to functional outcome, (iv) practicality and tolerability, (v) sensitivity to group differences, and (vi) internal consistency. The BLERT and Hinting task showed the strongest psychometric properties across all evaluation criteria and are recommended for use in clinical trials. The ER-40, Eyes Task, and TASIT showed somewhat weaker psychometric properties and require further study. The AIHQ, RAD, and Trustworthiness Task showed poorer psychometric properties that suggest caution for their use in clinical trials. PMID:25943125
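Test-retest reliability, the first evaluation criterion listed in this abstract, is conventionally summarized by correlating baseline and retest scores. A Pearson r is shown below as a simple stand-in (psychometric studies often prefer the intraclass correlation); the score values are made up for illustration.

```python
# Test-retest reliability sketch: correlate baseline scores with scores from
# a retest session a few weeks later.
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical task scores for five participants at baseline and retest.
baseline = [12, 15, 9, 20, 17]
retest = [13, 14, 10, 19, 18]
print(round(pearson_r(baseline, retest), 3))  # 0.973: high test-retest reliability
```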
Taming a wandering attention: short-form mindfulness training in student cohorts.
Morrison, Alexandra B; Goolsarran, Merissa; Rogers, Scott L; Jha, Amishi P
2014-01-06
Mindfulness training (MT) is a form of mental training in which individuals engage in exercises to cultivate an attentive, present-centered, and non-reactive mental mode. The present study examines the putative benefits of MT in university students, for whom mind wandering can interfere with learning and academic success. We tested the hypothesis that short-form MT (7 h over 7 weeks), contextualized for the challenges and concerns of university students, may reduce mind wandering and improve working memory. Performance on the sustained attention to response task (SART) and two working memory tasks (operation span, delayed-recognition with distracters) was indexed in participants assigned to a waitlist control group or the MT course. Results demonstrated MT-related benefits in SART performance. Relative to the control group, MT participants had higher task accuracy and self-reported being more "on-task" after the 7-week training period. MT did not significantly benefit the operation span task or accuracy on the delayed-recognition task. Together these results suggest that while short-form MT did not bolster working memory task performance, it may help curb mind wandering and should, therefore, be further investigated for its use in academic contexts.
Paris, Jason J; Frye, Cheryl A
2008-01-01
Ovarian hormone elevations are associated with enhanced learning/memory. During behavioral estrus or pregnancy, progestins, such as progesterone (P4) and its metabolite 5α-pregnan-3α-ol-20-one (3α,5α-THP), are elevated due, in part, to corpora luteal and placental secretion. During ‘pseudopregnancy’, the induction of corpora luteal functioning results in a hormonal milieu analogous to pregnancy, which ceases after about 12 days, due to the lack of placental formation. Multiparity is also associated with enhanced learning/memory, perhaps due to prior steroid exposure during pregnancy. Given evidence that progestins and/or parity may influence cognition, we investigated how natural alterations in the progestin milieu influence cognitive performance. In Experiment 1, virgin rats (nulliparous) or rats with two prior pregnancies (multiparous) were assessed on the object placement and recognition tasks, when in high-estrogen/P4 (behavioral estrus) or low-estrogen/P4 (diestrus) phases of the estrous cycle. In Experiment 2, primiparous or multiparous rats were tested in the object placement and recognition tasks when not pregnant, pseudopregnant, or pregnant (between gestational days (GDs) 6 and 12). In Experiment 3, pregnant primiparous or multiparous rats were assessed daily in the object placement or recognition tasks. Females in natural states associated with higher endogenous progestins (behavioral estrus, pregnancy, multiparity) outperformed rats in low progestin states (diestrus, non-pregnancy, nulliparity) on the object placement and recognition tasks. In earlier pregnancy, multiparous, compared with primiparous, rats had a lower corticosterone, but higher estrogen levels, concomitant with better object placement performance. From GD 13 until post partum, primiparous rats had higher 3α,5α-THP levels and improved object placement performance compared with multiparous rats. PMID:18390689
Raj, Vidya; Liang, Han-Chun; Woodward, Neil D.; Bauernfeind, Amy L.; Lee, Junghee; Dietrich, Mary; Park, Sohee; Cowan, Ronald L.
2011-01-01
Objectives MDMA users have impaired verbal memory, and voxel-based morphometry has demonstrated decreased gray matter in Brodmann area (BA) 18, 21 and 45. Because these regions play a role in verbal memory, we hypothesized that MDMA users would show altered brain activation in these areas during performance of an fMRI task that probed semantic verbal memory. Methods Polysubstance users enriched for MDMA exposure participated in a semantic memory encoding and recognition fMRI task that activated left BA 9, 18, 21/22 and 45. Primary outcomes were percent BOLD signal change in left BA 9, 18, 21/22 and 45, accuracy and response time. Results During semantic recognition, lifetime MDMA use was associated with decreased activation in left BA 9, 18 and 21/22 but not 45. This was partly influenced by contributions from cannabis and cocaine use. MDMA exposure was not associated with accuracy or response time during the semantic recognition task. Conclusions During semantic recognition, MDMA exposure is associated with reduced regional brain activation in regions mediating verbal memory. These findings partially overlap with prior structural evidence for reduced gray matter in MDMA users and may, in part, explain the consistent verbal memory impairments observed in other studies of MDMA users. PMID:19304866
Zucco, Gesualdo M; Bollini, Fabiola
2011-12-30
Olfactory deficits, in detection, recognition and identification of odorants have been documented in ageing and in several neurodegenerative and psychiatric conditions. However, olfactory abilities in Major Depressive Disorder (MDD) have been less investigated, and available studies have provided inconsistent results. The present study assessed odour recognition memory and odour identification in two groups of 12 mild MDD patients (M age 41.3, range 25-57) and 12 severe MDD patients (M age, 41.9, range 23-58) diagnosed according to DSM-IV criteria and matched for age and gender to 12 healthy normal controls. The suitability of olfactory identification and recognition memory tasks as predictors of the progression of MDD was also addressed. Data analyses revealed that Severe MDD patients performed significantly worse than Mild MDD patients and Normal controls on both tasks, with these last groups not differing significantly from one another. The present outcomes are consistent with previous studies in other domains which have shown reliable, although not conclusive, impairments in cognitive function, including memory, in patients with MDD, and highlight the role of olfactory identification and recognition tasks as an important additional tool to discriminate between patients characterised by different levels of severity of MDD. Copyright © 2011 Elsevier Ltd. All rights reserved.
Locality constrained joint dynamic sparse representation for local matching based face recognition.
Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun
2014-01-01
Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will degrade the performance of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
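The class-wise residual logic at the heart of SRC-style methods can be sketched as follows. This is a minimal illustration, assuming a least-squares solve per class sub-dictionary in place of the full l1-minimisation and the locality/joint-dynamic constraints of LCJDSRC; the function and variable names are hypothetical:

```python
import numpy as np

def src_classify(y, dictionaries):
    """Classify test vector y by class-wise reconstruction residuals.

    dictionaries: dict mapping class label -> (d, n_i) matrix whose
    columns are training samples for that class. Returns the label whose
    training samples reconstruct y with the smallest residual, a
    simplified stand-in for the sparse-coding step of full SRC.
    """
    best_label, best_residual = None, np.inf
    for label, D in dictionaries.items():
        # Least-squares coefficients over this class's sub-dictionary
        x, *_ = np.linalg.lstsq(D, y, rcond=None)
        residual = np.linalg.norm(y - D @ x)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```

A full SRC implementation would instead solve one sparse coding problem over the concatenated dictionary and compare per-class reconstruction residuals of the resulting code.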
Nguyen, Dat Tien; Park, Kang Ryoung
2016-07-21
With growing user demand, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of the gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images.
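The wHOG idea of scaling per-region descriptors by a quality weight can be sketched roughly as follows; the per-cell quality map and the weighting scheme here are illustrative assumptions, not the authors' exact measure:

```python
import numpy as np

def weighted_hog(image, cell=8, bins=9, quality=None):
    """Toy weighted-HOG descriptor: per-cell gradient-orientation
    histograms, each scaled by a per-cell quality weight (e.g. to
    down-weight background regions), then L2-normalised."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned orientation
    h, w = image.shape
    cells_y, cells_x = h // cell, w // cell
    if quality is None:
        quality = np.ones((cells_y, cells_x))     # hypothetical quality map
    feats = []
    for i in range(cells_y):
        for j in range(cells_x):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(quality[i, j] * hist)    # quality-weighted cell
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-9)         # L2 normalisation
```

In a full system the quality weights would come from the proposed image-region quality measurement rather than a uniform map.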
Impact of Intention on the ERP Correlates of Face Recognition
ERIC Educational Resources Information Center
Guillaume, Fabrice; Tiberghien, Guy
2013-01-01
The present study investigated the impact of study-test similarity on face recognition by manipulating, in the same experiment, the expression change (same vs. different) and the task-processing context (inclusion vs. exclusion instructions) as within-subject variables. Consistent with the dual-process framework, the present results showed that…
Electrophysiological distinctions between recognition memory with and without awareness
Ko, Philip C.; Duda, Bryant; Hussey, Erin P.; Ally, Brandon A.
2013-01-01
The influence of implicit memory representations on explicit recognition may help to explain cases of accurate recognition decisions made with high uncertainty. During a recognition task, implicit memory may enhance the fluency of a test item, biasing decision processes to endorse it as “old”. This model may help explain recognition-without-identification, a remarkable phenomenon in which participants make highly accurate recognition decisions despite the inability to identify the test item. The current study investigated whether recognition-without-identification for pictures elicits a similar pattern of neural activity as other types of accurate recognition decisions made with uncertainty. Further, this study also examined whether recognition-without-identification for pictures could be attained by the use of perceptual and conceptual information from memory. To accomplish this, participants studied pictures and then performed a recognition task under difficult viewing conditions while event-related potentials (ERPs) were recorded. Behavioral results showed that recognition was highly accurate even when test items could not be identified, demonstrating recognition-without-identification. The behavioral performance also indicated that recognition-without-identification was mediated by both perceptual and conceptual information, independently of one another. The ERP results showed dramatically different memory-related activity during the early 300 to 500 ms epoch for identified items that were studied compared to unidentified items that were studied. Similar to previous work highlighting accurate recognition without retrieval awareness, test items that were not identified, but correctly endorsed as “old,” elicited a negative posterior old/new effect (i.e., N300). In contrast, test items that were identified and correctly endorsed as “old,” elicited the classic positive frontal old/new effect (i.e., FN400). 
Importantly, both of these effects were elicited under conditions when participants used perceptual information to make recognition decisions. Conceptual information elicited very different ERPs than perceptual information, showing that the informational wealth of pictures can evoke multiple routes to recognition even without awareness of memory retrieval. These results are discussed within the context of current theories regarding the N300 and the FN400. PMID:23287567
Using eye movements as an index of implicit face recognition in autism spectrum disorder.
Hedley, Darren; Young, Robyn; Brewer, Neil
2012-10-01
Individuals with an autism spectrum disorder (ASD) typically show impairment on face recognition tasks. Performance has usually been assessed using overt, explicit recognition tasks. Here, a complementary method involving eye tracking was used to examine implicit face recognition in participants with ASD and in an intelligence quotient-matched non-ASD control group. Differences in eye movement indices between target and foil faces were used as an indicator of implicit face recognition. Explicit face recognition was assessed using old-new discrimination and reaction time measures. Stimuli were faces of studied (target) or unfamiliar (foil) persons. Target images at test were either identical to the images presented at study or altered by changing the lighting, pose, or by masking with visual noise. Participants with ASD performed worse than controls on the explicit recognition task. Eye movement-based measures, however, indicated that implicit recognition may not be affected to the same degree as explicit recognition. Autism Res 2012, 5: 363-379. © 2012 International Society for Autism Research, Wiley Periodicals, Inc.
Scotland, Jennifer L; McKenzie, Karen; Cossar, Jill; Murray, Aja; Michie, Amanda
2016-01-01
This study aimed to evaluate the emotion recognition abilities of adults (n=23) with an intellectual disability (ID) compared with a control group of children (n=23) without ID matched for estimated cognitive ability. The study examined the impact of: task paradigm, stimulus type and preferred processing style (global/local) on accuracy. We found that, after controlling for estimated cognitive ability, the control group performed significantly better than the individuals with ID. This provides some support for the emotion specificity hypothesis. Having a more local processing style did not significantly mediate the relation between having ID and emotion recognition, but did significantly predict emotion recognition ability after controlling for group. This suggests that processing style is related to emotion recognition independently of having ID. The availability of contextual information improved emotion recognition for people with ID when compared with line drawing stimuli, and identifying a target emotion from a choice of two was relatively easier for individuals with ID, compared with the other task paradigms. The results of the study are considered in the context of current theories of emotion recognition deficits in individuals with ID. Copyright © 2015 Elsevier Ltd. All rights reserved.
Context-dependent similarity effects in letter recognition.
Kinoshita, Sachiko; Robidoux, Serje; Guilbert, Daniel; Norris, Dennis
2015-10-01
In visual word recognition tasks, digit primes that are visually similar to letter string targets (e.g., 4/A, 8/B) are known to facilitate letter identification relative to visually dissimilar digits (e.g., 6/A, 7/B); in contrast, with letter primes, visual similarity effects have been elusive. In the present study we show that the visual similarity effect with letter primes can be made to come and go, depending on whether it is necessary to discriminate between visually similar letters. The results support a Bayesian view which regards letter recognition not as a passive activation process driven by the fixed stimulus properties, but as a dynamic evidence accumulation process for a decision that is guided by the task context.
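The Bayesian evidence-accumulation view described here can be illustrated with a toy model in which noisy visual samples sequentially update a posterior over candidate letters until a decision threshold is reached. The Gaussian noise model, templates, and threshold are illustrative assumptions, not a claim about the authors' model:

```python
import numpy as np

def accumulate_evidence(samples, templates, prior=None, threshold=0.95):
    """Toy Bayesian evidence accumulation for letter identification.

    Each noisy sample updates a posterior over candidate letters via a
    Gaussian likelihood around each letter's template; a decision is
    made once one posterior exceeds the threshold. Returns the chosen
    letter and the number of samples consumed.
    """
    letters = list(templates)
    log_p = np.log(np.full(len(letters), 1.0 / len(letters))
                   if prior is None else np.asarray(prior, float))
    for t, s in enumerate(samples, start=1):
        for k, letter in enumerate(letters):
            diff = s - templates[letter]
            log_p[k] += -0.5 * np.dot(diff, diff)   # log Gaussian likelihood
        post = np.exp(log_p - log_p.max())
        post /= post.sum()
        if post.max() >= threshold:
            return letters[int(post.argmax())], t   # early decision
    return letters[int(post.argmax())], len(samples)
```

On this view, discriminating between visually similar letters simply demands more samples before the posterior separates, which is consistent with similarity effects appearing only when such discrimination is required.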
Dynamic gesture recognition using neural networks: a fundament for advanced interaction construction
NASA Astrophysics Data System (ADS)
Boehm, Klaus; Broll, Wolfgang; Sokolewicz, Michael A.
1994-04-01
Interaction in virtual reality environments is still a challenging task. Static hand posture recognition is currently the most common and widely used method for interaction using glove input devices. In order to improve the naturalness of interaction, and thereby decrease the user-interface learning time, there is a need to be able to recognize dynamic gestures. In this paper we describe our approach to overcoming the difficulties of dynamic gesture recognition (DGR) using neural networks. Backpropagation neural networks have already proven themselves to be appropriate and efficient for posture recognition. However, the extensive amount of data involved in DGR requires a different approach. Because of features such as topology preservation and automatic-learning, Kohonen Feature Maps are particularly suitable for the reduction of the high dimensional data space that is the result of a dynamic gesture, and are thus implemented for this task.
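A minimal Kohonen feature map of the kind described, used to reduce high-dimensional gesture data onto a small 2-D grid, might look like the sketch below; the grid size, learning-rate and neighbourhood decay schedules are illustrative assumptions:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=30, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen self-organising map. Maps high-dimensional input
    vectors (e.g. gesture trajectories) onto a small 2-D grid of units
    while preserving topology."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.standard_normal((rows * cols, data.shape[1])) * 0.1
    # Grid coordinates of each unit, used by the neighbourhood function
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))     # Gaussian neighbourhood
            weights += lr * h[:, None] * (x - weights)
    return weights

def project(weights, x):
    """Map an input vector to the index of its best-matching unit."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```

After training, each gesture frame is reduced to a single grid index, and the resulting index sequences are short enough to feed to a backpropagation classifier.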
Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings
Pisoni, David B.
2015-01-01
Purpose Although the study of auditory training has traditionally been conducted in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method Experiments 1 (N = 37) and 2 (N = 21) used pre- and posttest measures of speech and nonspeech recognition to find evidence of learning (within subject) and to compare the effects of 3 kinds of training (between subject) on the perceptual abilities of adults with normal hearing listening to simulations of cochlear implant processing. Subjects were given interactive, standard lab-based, or control training experience for 1 hr between the pre- and posttest tasks (unique sets across Experiments 1 & 2). Results Subjects receiving interactive training showed significant learning on the sentence-recognition-in-quiet task (Experiment 1), outperforming controls but not lab-trained subjects following training. Training groups did not differ significantly on any other task, even those directly involved in the interactive training experience. Conclusions Interactive training has the potential to produce learning in 1 domain (sentence recognition in quiet), but the particulars of the present training method (short duration, high complexity) may have limited benefits to this single criterion task. PMID:25674884
Xia, Jing; Nooraei, Nazanin; Kalluri, Sridhar; Edwards, Brent
2015-04-01
This study investigated whether spatial separation between talkers helps reduce cognitive processing load, and how hearing impairment interacts with the cognitive load of individuals listening in multi-talker environments. A dual-task paradigm was used in which performance on a secondary task (visual tracking) served as a measure of the cognitive load imposed by a speech recognition task. Visual tracking performance was measured under four conditions in which the target and the interferers were distinguished by (1) gender and spatial location, (2) gender only, (3) spatial location only, and (4) neither gender nor spatial location. Results showed that when gender cues were available, a 15° spatial separation between talkers reduced the cognitive load of listening even though it did not provide further improvement in speech recognition (Experiment I). Compared to normal-hearing listeners, large individual variability in spatial release of cognitive load was observed among hearing-impaired listeners. Cognitive load was lower when talkers were spatially separated by 60° than when talkers were of different genders, even though speech recognition was comparable in these two conditions (Experiment II). These results suggest that a measure of cognitive load might provide valuable insight into the benefit of spatial cues in multi-talker environments.
Functional differences among those high and low on a trait measure of psychopathy.
Gordon, Heather L; Baird, Abigail A; End, Alison
2004-10-01
It has been established that individuals who score high on measures of psychopathy demonstrate difficulty when performing tasks requiring the interpretation of others' emotional states. The aim of this study was to elucidate the relation of emotion and cognition to individual differences on a standard psychopathy personality inventory (PPI) among a nonpsychiatric population. Twenty participants completed the PPI. Following survey completion, a mean split of their scores on the emotional-interpersonal factor was performed, and participants were placed into a high or low group. Functional magnetic resonance imaging data were collected while participants performed a recognition task that required attention be given to either the affect or identity of target stimuli. No significant behavioral differences were found. In response to the affect recognition task, significant differences between high- and low-scoring subjects were observed in several subregions of the frontal cortex, as well as the amygdala. No significant differences were found between the groups in response to the identity recognition condition. Results indicate that participants scoring high on the PPI, although not behaviorally distinct, demonstrate a significantly different pattern of neural activity (as measured by blood oxygen level-dependent contrast) in response to tasks that require affective processing. The results suggest a unique neural signature associated with personality differences in a nonpsychiatric population.
Onojima, Takayuki; Kitajo, Keiichi; Mizuhara, Hiroaki
2017-01-01
Neural oscillation is attracting attention as an underlying mechanism for speech recognition. Speech intelligibility is enhanced by the synchronization of speech rhythms and slow neural oscillation, which is typically observed in human scalp electroencephalography (EEG). In addition to the effect of neural oscillation, it has been proposed that speech recognition is enhanced by the identification of a speaker's motor signals, which are used for speech production. To verify the relationship between the effect of neural oscillation and motor cortical activity, we measured scalp EEG, and simultaneous EEG and functional magnetic resonance imaging (fMRI), during a speech recognition task in which participants were required to recognize spoken words embedded in noise. We proposed an index to quantitatively evaluate the EEG phase effect on behavioral performance. The results showed that the delta and theta EEG phase before speech inputs modulated the participants' response times when conducting speech recognition tasks. The simultaneous EEG-fMRI experiment showed that slow EEG activity was correlated with motor cortical activity. These results suggested that the effect of the slow oscillatory phase was associated with the activity of the motor cortex during speech recognition.
Social Recognition Memory Requires Two Stages of Protein Synthesis in Mice
ERIC Educational Resources Information Center
Wolf, Gerald; Engelmann, Mario; Richter, Karin
2005-01-01
Olfactory recognition memory was tested in adult male mice using a social discrimination task. The testing was conducted to begin to characterize the role of protein synthesis and the specific brain regions associated with activity in this task. Long-term olfactory recognition memory was blocked when the protein synthesis inhibitor anisomycin was…
The effect of encoding strategy on the neural correlates of memory for faces.
Bernstein, Lori J; Beig, Sania; Siegenthaler, Amy L; Grady, Cheryl L
2002-01-01
Encoding and recognition of unfamiliar faces in young adults were examined using positron emission tomography to determine whether different encoding strategies would lead to encoding/retrieval differences in brain activity. Three types of encoding were compared: a 'deep' task (judging pleasantness/unpleasantness), a 'shallow' task (judging right/left orientation), and an intentional learning task in which subjects were instructed to learn the faces for a subsequent memory test but were not provided with a specific strategy. Memory for all faces was tested with an old/new recognition test. A modest behavioral effect was obtained, with deeply-encoded faces being recognized more accurately than shallowly-encoded or intentionally-learned faces. Regardless of encoding strategy, encoding activated a primarily ventral system including bilateral temporal and fusiform regions and left prefrontal cortices, whereas recognition activated a primarily dorsal set of regions including right prefrontal and parietal areas. Within encoding, the type of strategy produced different brain activity patterns, with deep encoding being characterized by left amygdala and left anterior cingulate activation. There was no effect of encoding strategy on brain activity during the recognition conditions. Posterior fusiform gyrus activation was related to better recognition accuracy in those conditions encouraging perceptual strategies, whereas activity in left frontal and temporal areas correlated with better performance during the 'deep' condition. Results highlight three important aspects of face memory: (1) the effect of encoding strategy was seen only at encoding and not at recognition; (2) left inferior prefrontal cortex was engaged during encoding of faces regardless of strategy; and (3) differential activity in fusiform gyrus was found, suggesting that activity in this area is not only a result of automatic face processing but is modulated by controlled processes.
Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania
2014-12-01
A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. 
Prospective studies need to be designed to evaluate the results of appropriate re-education programs in children presenting with deficits in social cue processing.
Action Recognition in a Crowded Environment
Nieuwenhuis, Judith; Bülthoff, Isabelle; Barraclough, Nick; de la Rosa, Stephan
2017-01-01
So far, action recognition has been mainly examined with small point-light human stimuli presented alone within a narrow central area of the observer’s visual field. Yet, we need to recognize the actions of life-size humans viewed alone or surrounded by bystanders, whether they are seen in central or peripheral vision. Here, we examined the mechanisms in central vision and the far periphery (40° eccentricity) involved in the recognition of the actions of a life-size actor (target) and their sensitivity to the presence of a crowd surrounding the target. In Experiment 1, we used an action adaptation paradigm to probe whether static or idly moving crowds might interfere with the recognition of a target’s action (hug or clap). We found that this type of crowd, whose movements were dissimilar to the target action, hardly affected action recognition in central and peripheral vision. In Experiment 2, we examined whether crowd actions that were more similar to the target actions affected action recognition. Indeed, the presence of that crowd diminished adaptation aftereffects in central vision as well as in the periphery. We replicated Experiment 2 using a recognition task instead of an adaptation paradigm. With this task, we found evidence of decreased action recognition accuracy, but this was significant in peripheral vision only. Our results suggest that the presence of a crowd carrying out actions similar to that of the target affects its recognition. We outline how these results can be understood in terms of high-level crowding effects that operate on action-sensitive perceptual channels. PMID:29308177
Mitchnick, Krista A; Wideman, Cassidy E; Huff, Andrew E; Palmer, Daniel; McNaughton, Bruce L; Winters, Boyer D
2018-05-15
The capacity to recognize objects from different view-points or angles, referred to as view-invariance, is an essential process that humans engage in daily. Currently, the ability to investigate the neurobiological underpinnings of this phenomenon is limited, as few ethologically valid view-invariant object recognition tasks exist for rodents. Here, we report two complementary, novel view-invariant object recognition tasks in which rodents physically interact with three-dimensional objects. Prior to experimentation, rats and mice were given extensive experience with a set of 'pre-exposure' objects. In a variant of the spontaneous object recognition task, novelty preference for pre-exposed or new objects was assessed at various angles of rotation (45°, 90° or 180°); unlike control rodents, for whom the objects were novel, rats and mice tested with pre-exposed objects did not discriminate between rotated and un-rotated objects in the choice phase, indicating substantial view-invariant object recognition. Secondly, using automated operant touchscreen chambers, rats were tested on pre-exposed or novel objects in a pairwise discrimination task, where the rewarded stimulus (S+) was rotated (180°) once rats had reached acquisition criterion; rats tested with pre-exposed objects re-acquired the pairwise discrimination following S+ rotation more effectively than those tested with new objects. Systemic scopolamine impaired performance on both tasks, suggesting involvement of acetylcholine at muscarinic receptors in view-invariant object processing. These tasks present novel means of studying the behavioral and neural bases of view-invariant object recognition in rodents. Copyright © 2018 Elsevier B.V. All rights reserved.
Activity Recognition for Personal Time Management
NASA Astrophysics Data System (ADS)
Prekopcsák, Zoltán; Soha, Sugárka; Henk, Tamás; Gáspár-Papanek, Csaba
We describe an accelerometer-based activity recognition system for mobile phones with a special focus on personal time management. We compare several data mining algorithms for the automatic recognition task in both single-user and multiuser scenarios, and improve accuracy with heuristics and advanced data mining methods. The results show that daily activities can be recognized with high accuracy and that integration with the RescueTime software can give good insights for personal time management.
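The abstract does not name the algorithms compared; as an illustration of the general approach only, here is a minimal nearest-centroid activity classifier over simple statistical features of accelerometer windows. All labels, data, and function names are hypothetical, not taken from the system described above.

```python
import math
from statistics import mean, stdev

def extract_features(window):
    """Mean and sample standard deviation of acceleration magnitude
    over a window of (x, y, z) readings."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    return (mean(mags), stdev(mags))

def train_centroids(labeled_windows):
    """Average the feature vectors per activity label (nearest-centroid model)."""
    sums = {}
    for label, window in labeled_windows:
        f = extract_features(window)
        s = sums.setdefault(label, [0.0, 0.0, 0])
        s[0] += f[0]; s[1] += f[1]; s[2] += 1
    return {lab: (s[0] / s[2], s[1] / s[2]) for lab, s in sums.items()}

def classify(window, centroids):
    """Assign the label whose centroid is nearest in feature space."""
    f = extract_features(window)
    return min(centroids, key=lambda lab: (f[0] - centroids[lab][0]) ** 2
                                          + (f[1] - centroids[lab][1]) ** 2)
```

In practice, low-variance magnitude windows separate sedentary activities from high-variance ones such as walking; richer feature sets and classifiers would be needed for the accuracy levels the paper reports.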
EEG based topography analysis in string recognition task
NASA Astrophysics Data System (ADS)
Ma, Xiaofei; Huang, Xiaolin; Shen, Yuxiaotong; Qin, Zike; Ge, Yun; Chen, Ying; Ning, Xinbao
2017-03-01
Visual perception and recognition is a complex process during which different parts of the brain are involved, depending on the specific modality of the visual target, e.g. face, character, or word. In this study, brain activities in a string recognition task, compared with an idle control state, are analyzed through topographies based on multiple measurements, i.e. sample entropy, symbolic sample entropy and normalized rhythm power, extracted from simultaneously collected scalp EEG. Our analyses show that, for most subjects, both symbolic sample entropy and normalized gamma power in the string recognition task are significantly higher than those in the idle state, especially at locations P4, O2, T6 and C4. This implies that these regions are highly involved in the string recognition task. Since symbolic sample entropy measures complexity from the perspective of new information generation, and normalized rhythm power reveals the power distribution in the frequency domain, complementary information about the underlying dynamics can be provided through the two types of indices.
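Sample entropy, one of the measures listed above, is the negative log of the conditional probability that sequences matching for m points (within tolerance r) also match for m+1 points. The following is a generic naive implementation for illustration; the study's exact parameters, and its symbolic variant, are not given in the abstract.

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r): -ln(A/B), where B counts pairs of length-m templates
    within tolerance r (Chebyshev distance, self-matches excluded) and A
    counts the same pairs still matching at length m+1."""
    n = len(series) - m  # number of template vectors compared at both lengths
    def matches(k):
        t = [series[i:i + k] for i in range(n)]
        return sum(
            1
            for i in range(n)
            for j in range(i + 1, n)
            if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r
        )
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")
```

A perfectly periodic series yields a value near zero (fully predictable), while irregular series yield larger values; since every (m+1)-point match is also an m-point match, the result is never negative.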
Repetition and brain potentials when recognizing natural scenes: task and emotion differences
Bradley, Margaret M.; Codispoti, Maurizio; Karlsson, Marie; Lang, Peter J.
2013-01-01
Repetition has long been known to facilitate memory performance, but its effects on event-related potentials (ERPs), measured as an index of recognition memory, are less well characterized. In Experiment 1, effects of both massed and distributed repetition on old–new ERPs were assessed during an immediate recognition test that followed incidental encoding of natural scenes that also varied in emotionality. Distributed repetition at encoding enhanced both memory performance and the amplitude of an old–new ERP difference over centro-parietal sensors. To assess whether these repetition effects reflect encoding or retrieval differences, the recognition task was replaced with passive viewing of old and new pictures in Experiment 2. In the absence of an explicit recognition task, ERPs were completely unaffected by repetition at encoding, and only emotional pictures prompted a modestly enhanced old–new difference. Taken together, the data suggest that repetition facilitates retrieval processes and that, in the absence of an explicit recognition task, differences in old–new ERPs are only apparent for affective cues. PMID:22842817
Tejeria, L; Harper, R A; Artes, P H; Dickinson, C M
2002-09-01
(1) To explore the relation between performance on tasks of familiar face recognition (FFR) and face expression difference discrimination (FED) with both perceived disability in face recognition and clinical measures of visual function in subjects with age-related macular degeneration (AMD). (2) To quantify the gain in performance for face recognition tasks when subjects use a bioptic telescopic low vision device. 30 subjects with AMD (age range 66-90 years; visual acuity 0.4-1.4 logMAR) were recruited for the study. Perceived (self-rated) disability in face recognition was assessed by an eight-item questionnaire covering a range of issues relating to face recognition. Visual functions measured were distance visual acuity (ETDRS logMAR charts), continuous text reading acuity (MNRead charts), contrast sensitivity (Pelli-Robson chart), and colour vision (large panel D-15). In the FFR task, images of famous people had to be identified. FED was assessed by a forced-choice test in which subjects had to decide which one of four images showed a different facial expression. These tasks were repeated with subjects using a bioptic device. Overall perceived disability in face recognition did not correlate with performance on either task, although a specific item on difficulty recognising familiar faces did correlate with FFR (r = 0.49, p<0.05). FFR performance was most closely related to distance acuity (r = -0.69, p<0.001), while FED performance was most closely related to continuous text reading acuity (r = -0.79, p<0.001). In multiple regression, neither contrast sensitivity nor colour vision significantly increased the explained variance. When using a bioptic telescope, FFR performance improved in 86% of subjects (median gain = 49%; p<0.001), while FED performance increased in 79% of subjects (median gain = 50%; p<0.01). Distance and reading visual acuity are closely associated with measured task performance in FFR and FED.
A bioptic low vision device can offer a significant improvement in performance for face recognition tasks, and may be useful in reducing the handicap associated with this disability. There is, however, little evidence for a correlation between self rated difficulty in face recognition and measured performance for either task. Further work is needed to explore the complex relation between the perception of disability and measured performance.
Face-name association learning and brain structural substrates in alcoholism.
Pitel, Anne-Lise; Chanraud, Sandra; Rohlfing, Torsten; Pfefferbaum, Adolf; Sullivan, Edith V
2012-07-01
Associative learning is required for face-name association and is impaired in alcoholism, but the cognitive processes and brain structural components underlying this deficit remain unclear. It is also unknown whether prompting alcoholics to implement a deep level of processing during face-name encoding would enhance performance. Abstinent alcoholics and controls performed a levels-of-processing face-name learning task. Participants indicated whether the face was that of an honest person (deep encoding) or that of a man (shallow encoding). Retrieval was examined using an associative (face-name) recognition task and a single-item (face or name only) recognition task. Participants also underwent 3T structural MRI. Compared with controls, alcoholics had poorer associative learning but performed at similar levels on single-item learning. Level of processing at encoding had little effect on recognition performance but affected reaction time (RT). Correlations with brain volumes were generally modest and based primarily on RT in alcoholics, where the deeper the processing at encoding, the more restricted the correlations with brain volumes. In alcoholics, longer control task RTs correlated modestly with smaller tissue volumes across several anterior to posterior brain regions; shallow encoding correlated with calcarine and striatal volumes; deep encoding correlated with precuneus and parietal volumes; and associative recognition RT correlated with cerebellar volumes. In controls, poorer associative recognition with deep encoding correlated significantly with smaller volumes of frontal and striatal structures. Despite prompting, alcoholics did not take advantage of encoding memoranda at a deep level to enhance face-name recognition accuracy. Nonetheless, conditions of deeper encoding resulted in faster RTs and more specific relations with regional brain volumes than did shallow encoding. 
The normal relation between associative recognition and corticostriatal volumes was not present in alcoholics. Rather, their speeded RTs occurred at the expense of accuracy and were related most robustly to cerebellar volumes. Copyright © 2012 by the Research Society on Alcoholism.
Task-Dependent Masked Priming Effects in Visual Word Recognition
Kinoshita, Sachiko; Norris, Dennis
2012-01-01
A method used widely to study the first 250 ms of visual word recognition is masked priming: These studies have yielded a rich set of data concerning the processes involved in recognizing letters and words. In these studies, there is an implicit assumption that the early processes in word recognition tapped by masked priming are automatic, and masked priming effects should therefore be invariant across tasks. Contrary to this assumption, masked priming effects are modulated by the task goal: For example, only word targets show priming in the lexical decision task, but both words and non-words do in the same-different task; semantic priming effects are generally weak in the lexical decision task but are robust in the semantic categorization task. We explain how such task dependence arises within the Bayesian Reader account of masked priming (Norris and Kinoshita, 2008), and how the task dissociations can be used to understand the early processes in lexical access. PMID:22675316
Influence of auditory attention on sentence recognition captured by the neural phase.
Müller, Jana Annina; Kollmeier, Birger; Debener, Stefan; Brand, Thomas
2018-03-07
The aim of this study was to investigate whether attentional influences on speech recognition are reflected in the neural phase entrained by an external modulator. Sentences were presented in 7 Hz sinusoidally modulated noise while the neural response to that modulation frequency was monitored by electroencephalogram (EEG) recordings in 21 participants. We implemented a selective attention paradigm including three different attention conditions while keeping physical stimulus parameters constant. The participants' task was either to repeat the sentence as accurately as possible (speech recognition task), to count the number of decrements implemented in modulated noise (decrement detection task), or to do both (dual task), while the EEG was recorded. Behavioural analysis revealed reduced performance in the dual task condition for decrement detection, possibly reflecting limited cognitive resources. EEG analysis revealed no significant differences in power for the 7 Hz modulation frequency, but an attention-dependent phase difference between tasks. Further phase analysis revealed a significant difference 500 ms after sentence onset between trials with correct and incorrect responses for speech recognition, indicating that speech recognition performance and the neural phase are linked via selective attention mechanisms, at least shortly after sentence onset. However, the neural phase effects identified were small and await further investigation. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Pitarque, Alfonso; Meléndez, Juan C; Sales, Alicia; Mayordomo, Teresa; Satorres, Encar; Escudero, Joaquín; Algarabel, Salvador
2016-10-01
Given the uneven experimental results in the literature regarding whether or not familiarity declines with healthy aging and cognitive impairment, we compare four samples (healthy young people, healthy older people, older people with amnestic mild cognitive impairment [aMCI], and older people with Alzheimer's disease [AD]) on an associative recognition task, which, following the logic of the process-dissociation procedure, allowed us to obtain corrected estimates of recollection, familiarity and false recognition. The results show that familiarity does not decline with healthy aging, but it does with cognitive impairment, whereas false recognition increases with healthy aging, but declines significantly with cognitive impairment. These results support the idea that the deficits detected in recollection, familiarity, or false recognition in older people could be used as early prodromal markers of cognitive impairment. Copyright © 2016 Elsevier Ltd. All rights reserved.
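The process-dissociation logic mentioned above yields closed-form estimates of recollection and familiarity from performance on inclusion and exclusion conditions. A minimal sketch with hypothetical rates follows; the abstract does not report the study's actual equations or values, and the standard equations (inclusion = R + (1-R)F, exclusion = (1-R)F) are assumed.

```python
def process_dissociation(inclusion, exclusion):
    """Estimate recollection (R) and familiarity (F) under the standard
    process-dissociation equations:
        inclusion = R + (1 - R) * F
        exclusion = (1 - R) * F
    so that R = inclusion - exclusion and F = exclusion / (1 - R)."""
    r = inclusion - exclusion
    f = exclusion / (1.0 - r) if r < 1.0 else float("nan")
    return r, f
```

For example, an inclusion rate of 0.8 with an exclusion rate of 0.3 gives R = 0.5 and F = 0.6, which reproduces the inclusion rate as 0.5 + 0.5 × 0.6 = 0.8.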
Kaewphan, Suwisa; Van Landeghem, Sofie; Ohta, Tomoko; Van de Peer, Yves; Ginter, Filip; Pyysalo, Sampo
2016-01-01
Motivation: The recognition and normalization of cell line names in text is an important task in biomedical text mining research, facilitating for instance the identification of synthetically lethal genes from the literature. While several tools have previously been developed to address cell line recognition, it is unclear whether available systems can perform sufficiently well in realistic and broad-coverage applications such as extracting synthetically lethal genes from the cancer literature. In this study, we revisit the cell line name recognition task, evaluating both available systems and newly introduced methods on various resources to obtain a reliable tagger not tied to any specific subdomain. In support of this task, we introduce two text collections manually annotated for cell line names: the broad-coverage corpus Gellus and CLL, a focused target domain corpus. Results: We find that the best performance is achieved using NERsuite, a machine learning system based on Conditional Random Fields, trained on the Gellus corpus and supported with a dictionary of cell line names. The system achieves an F-score of 88.46% on the test set of Gellus and 85.98% on the independently annotated CLL corpus. It was further applied at large scale to 24 302 102 unannotated articles, resulting in the identification of 5 181 342 cell line mentions, normalized to 11 755 unique cell line database identifiers. Availability and implementation: The manually annotated datasets, the cell line dictionary, derived corpora, NERsuite models and the results of the large-scale run on unannotated texts are available under open licenses at http://turkunlp.github.io/Cell-line-recognition/. Contact: sukaew@utu.fi PMID:26428294
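For reference, F-scores like those reported above are the harmonic mean of precision and recall. The following is a generic sketch of the Fβ computation, not code from the paper's evaluation pipeline.

```python
def f_score(precision, recall, beta=1.0):
    """F-measure: weighted harmonic mean of precision and recall.
    beta = 1 gives the balanced F1 commonly reported for NER systems."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

The harmonic mean penalizes imbalance: a tagger with precision 0.5 and recall 1.0 scores only F1 ≈ 0.667, not the arithmetic mean of 0.75.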
Emotion recognition and oxytocin in patients with schizophrenia
Averbeck, B. B.; Bobin, T.; Evans, S.; Shergill, S. S.
2012-01-01
Background Studies have suggested that patients with schizophrenia are impaired at recognizing emotions. Recently, it has been shown that the neuropeptide oxytocin can have beneficial effects on social behaviors. Method To examine emotion recognition deficits in patients and see whether oxytocin could improve these deficits, we carried out two experiments. In the first experiment we recruited 30 patients with schizophrenia and 29 age- and IQ-matched control subjects, and gave them an emotion recognition task. Following this, we carried out a second experiment in which we recruited 21 patients with schizophrenia for a double-blind, placebo-controlled cross-over study of the effects of oxytocin on the same emotion recognition task. Results In the first experiment we found that patients with schizophrenia had a deficit relative to controls in recognizing emotions. In the second experiment we found that administration of oxytocin improved the ability of patients to recognize emotions. The improvement was consistent and occurred for most emotions, and was present whether patients were identifying morphed or non-morphed faces. Conclusions These data add to a growing literature showing beneficial effects of oxytocin on social–behavioral tasks, as well as clinical symptoms. PMID:21835090
Robust relationship between reading span and speech recognition in noise.
Souza, Pamela; Arehart, Kathryn
2015-01-01
Working memory refers to a cognitive system that manages information processing and temporary storage. Recent work has demonstrated that individual differences in working memory capacity, measured using a reading span task, are related to the ability to recognize speech in noise. In this project, we investigated whether the specific implementation of the reading span task influenced the strength of the relationship between working memory capacity and speech recognition. The relationship between speech recognition and working memory capacity was examined for two different working memory tests that varied in approach, using a within-subject design. Data consisted of audiometric results, the two working memory tests, a speech-in-noise test, and a reading comprehension test. The test group included 94 older adults with varying hearing loss and 30 younger adults with normal hearing. Listeners with poorer working memory capacity had more difficulty understanding speech in noise after accounting for age and degree of hearing loss. That relationship did not differ significantly between the two implementations of reading span. Our findings suggest that different implementations of a verbal reading span task do not affect the strength of the relationship between working memory capacity and speech recognition.
Sandford, Adam; Burton, A Mike
2014-09-01
Face recognition is widely held to rely on 'configural processing', an analysis of spatial relations between facial features. We present three experiments in which viewers were shown distorted faces, and asked to resize these to their correct shape. Based on configural theories appealing to metric distances between features, we reason that this should be an easier task for familiar than unfamiliar faces (whose subtle arrangements of features are unknown). In fact, participants were inaccurate at this task, making between 8% and 13% errors across experiments. Importantly, we observed no advantage for familiar faces: in one experiment participants were more accurate with unfamiliars, and in two experiments there was no difference. These findings were not due to general task difficulty - participants were able to resize blocks of colour to target shapes (squares) more accurately. We also found an advantage of familiarity for resizing other stimuli (brand logos). If configural processing does underlie face recognition, these results place constraints on the definition of 'configural'. Alternatively, familiar face recognition might rely on more complex criteria - based on tolerance to within-person variation rather than highly specific measurement. Copyright © 2014 Elsevier B.V. All rights reserved.
Emotion-attention interactions in recognition memory for distractor faces.
Srinivasan, Narayanan; Gupta, Rashmi
2010-04-01
Effective filtering of distractor information has been shown to be dependent on perceptual load. Given the salience of emotional information and the presence of emotion-attention interactions, we wanted to explore the recognition memory for emotional distractors especially as a function of focused attention and distributed attention by manipulating load and the spatial spread of attention. We performed two experiments to study emotion-attention interactions by measuring recognition memory performance for distractor neutral and emotional faces. Participants performed a color discrimination task (low-load) or letter identification task (high-load) with a letter string display in Experiment 1 and a high-load letter identification task with letters presented in a circular array in Experiment 2. The stimuli were presented against a distractor face background. The recognition memory results show that happy faces were recognized better than sad faces under conditions of less focused or distributed attention. When attention is more spatially focused, sad faces were recognized better than happy faces. The study provides evidence for emotion-attention interactions in which specific emotional information like sad or happy is associated with focused or distributed attention respectively. Distractor processing with emotional information also has implications for theories of attention. Copyright 2010 APA, all rights reserved.
Kristjánsson, Arni
2009-04-24
Previously demonstrated learning effects in shifts of transient attention have only been shown to benefit secondary discrimination tasks and to affect the landing points of express saccades. Can such learning result in more direct effects upon perception than previously demonstrated? Observers performed a cued Vernier acuity discrimination task where the cue was one of a set of ambiguous figure-ground displays (with a black and a white part). The critical measure was whether, if a target appeared consistently within a part of a cue of a certain brightness, this would result in learning effects, and whether such learning would then affect recognition of the cue parts. Critically, the target always appeared within the same part of each individual cue. Some cues were used in early parts of streaks of repetition of cue-part brightness, and others in later parts of such streaks. All the observers showed learning in shifts of transient attention, with improved performance the more often the target appeared within the part of the cue of the same brightness. Subsequently, the observers judged whether cue parts had been parts of the cues used on the preceding discrimination task. Recognition of the figure parts where the target had consistently appeared improved strongly with increased length of streaks of repetition of cue-part brightness. Learning in shifts of transient attention leads not only to faster attention shifts but to direct effects upon perception, in this case recognition of parts of figure-ground ambiguous cues.
The Low-Frequency Encoding Disadvantage: Word Frequency Affects Processing Demands
ERIC Educational Resources Information Center
Diana, Rachel A.; Reder, Lynne M.
2006-01-01
Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative…
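Hit and false-alarm rates such as those discussed above are commonly combined into the signal-detection sensitivity index d′. A brief illustrative sketch follows, using made-up rates rather than the study's data.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF. Rates of exactly
    0 or 1 must be adjusted (e.g. by 1/(2N)) before calling this."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

On this index, a low-frequency word condition with more hits and fewer false alarms than a high-frequency condition yields a higher d′, i.e. better discrimination of old from new items.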
Face Processing and Facial Emotion Recognition in Adults with Down Syndrome
ERIC Educational Resources Information Center
Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial
2008-01-01
Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…
Priming Contour-Deleted Images: Evidence for Immediate Representations in Visual Object Recognition.
ERIC Educational Resources Information Center
Biederman, Irving; Cooper, Eric E.
1991-01-01
Speed and accuracy of identification of pictures of objects are facilitated by prior viewing. Contributions of image features, convex or concave components, and object models in a repetition priming task were explored in 2 studies involving 96 college students. Results provide evidence of intermediate representations in visual object recognition.…
The relationships between trait anxiety, place recognition memory, and learning strategy.
Hawley, Wayne R; Grissom, Elin M; Dohanich, Gary P
2011-01-20
Rodents learn to navigate mazes using various strategies that are governed by specific regions of the brain. The type of strategy used when learning to navigate a spatial environment is moderated by a number of factors including emotional states. Heightened anxiety states, induced by exposure to stressors or administration of anxiogenic agents, have been found to bias male rats toward the use of a striatum-based stimulus-response strategy rather than a hippocampus-based place strategy. However, no study has yet examined the relationship between natural anxiety levels, or trait anxiety, and the type of learning strategy used by rats on a dual-solution task. In the current experiment, levels of inherent anxiety were measured in an open field and compared to performance on two separate cognitive tasks, a Y-maze task that assessed place recognition memory, and a visible platform water maze task that assessed learning strategy. Results indicated that place recognition memory on the Y-maze correlated with the use of place learning strategy on the water maze. Furthermore, lower levels of trait anxiety correlated positively with better place recognition memory and with the preferred use of place learning strategy. Therefore, competency in place memory and bias in place strategy are linked to the levels of inherent anxiety in male rats. Copyright © 2010 Elsevier B.V. All rights reserved.
Dissociation between facial and bodily expressions in emotion recognition: A case study.
Leiva, Samanta; Margulis, Laura; Micciulli, Andrea; Ferreres, Aldo
2017-12-21
Existing single-case studies have reported deficit in recognizing basic emotions through facial expression and unaffected performance with body expressions, but not the opposite pattern. The aim of this paper is to present a case study with impaired emotion recognition through body expressions and intact performance with facial expressions. In this single-case study we assessed a 30-year-old patient with autism spectrum disorder, without intellectual disability, and a healthy control group (n = 30) with four tasks of basic and complex emotion recognition through face and body movements, and two non-emotional control tasks. To analyze the dissociation between facial and body expressions, we used Crawford and Garthwaite's operational criteria, and we compared the patient and the control group performance with a modified one-tailed t-test designed specifically for single-case studies. There were no statistically significant differences between the patient's and the control group's performances on the non-emotional body movement task or the facial perception task. For both kinds of emotions (basic and complex) when the patient's performance was compared to the control group's, statistically significant differences were only observed for the recognition of body expressions. There were no significant differences between the patient's and the control group's correct answers for emotional facial stimuli. Our results showed a profile of impaired emotion recognition through body expressions and intact performance with facial expressions. This is the first case study that describes the existence of this kind of dissociation pattern between facial and body expressions of basic and complex emotions.
Cingulo-opercular activity affects incidental memory encoding for speech in noise.
Vaden, Kenneth I; Teubner-Rhodes, Susan; Ahlstrom, Jayne B; Dubno, Judy R; Eckert, Mark A
2017-08-15
Correctly understood speech in difficult listening conditions is often difficult to remember. A long-standing hypothesis for this observation is that the engagement of cognitive resources to aid speech understanding can limit resources available for memory encoding. This hypothesis is consistent with evidence that speech presented in difficult conditions typically elicits greater activity throughout cingulo-opercular regions of frontal cortex that are proposed to optimize task performance through adaptive control of behavior and tonic attention. However, successful memory encoding of items for delayed recognition memory tasks is consistently associated with increased cingulo-opercular activity when perceptual difficulty is minimized. The current study used a delayed recognition memory task to test competing predictions that memory encoding for words is enhanced or limited by the engagement of cingulo-opercular activity during challenging listening conditions. An fMRI experiment was conducted with twenty healthy adult participants who performed a word identification in noise task that was immediately followed by a delayed recognition memory task. Consistent with previous findings, word identification trials in the poorer signal-to-noise ratio condition were associated with increased cingulo-opercular activity and poorer recognition memory scores on average. However, cingulo-opercular activity decreased for correctly identified words in noise that were not recognized in the delayed memory test. These results suggest that memory encoding in difficult listening conditions is poorer when elevated cingulo-opercular activity is not sustained. Although increased attention to speech when presented in difficult conditions may detract from more active forms of memory maintenance (e.g., sub-vocal rehearsal), we conclude that task performance monitoring and/or elevated tonic attention supports incidental memory encoding in challenging listening conditions. 
Copyright © 2017 Elsevier Inc. All rights reserved.
Top-down modulation of ventral occipito-temporal responses during visual word recognition.
Twomey, Tae; Kawabata Duncan, Keith J; Price, Cathy J; Devlin, Joseph T
2011-04-01
Although interactivity is considered a fundamental principle of cognitive (and computational) models of reading, it has received far less attention in neural models of reading, which instead focus on serial stages of feed-forward processing from visual input to orthographic processing to accessing the corresponding phonological and semantic information. In particular, the left ventral occipito-temporal (vOT) cortex is proposed to be the first stage where visual word recognition occurs, prior to accessing nonvisual information such as semantics and phonology. We used functional magnetic resonance imaging (fMRI) to investigate whether there is evidence that activation in vOT is influenced top-down by the interaction of visual and nonvisual properties of the stimuli during visual word recognition tasks. Participants performed two different types of lexical decision tasks that focused on either visual or nonvisual properties of the word or word-like stimuli. The design allowed us to investigate how vOT activation during visual word recognition was influenced by a task change to the same stimuli and by a stimulus change during the same task. We found both stimulus- and task-driven modulation of vOT activation that can only be explained by top-down processing of nonvisual aspects of the task and stimuli. Our results are consistent with the hypothesis that vOT acts as an interface linking visual form with nonvisual processing in both bottom-up and top-down directions. Such interactive processing at the neural level is in agreement with cognitive and computational models of reading but challenges some of the assumptions made by current neuro-anatomical models of reading. Copyright © 2011 Elsevier Inc. All rights reserved.
van Veluw, Susanne J; Chance, Steven A
2014-03-01
The perception of self and others is a key aspect of social cognition. In order to investigate the neurobiological basis of this distinction we reviewed two classes of task that study self-awareness and awareness of others (theory of mind, ToM). A reliable task to measure self-awareness is the recognition of one's own face in contrast to the recognition of others' faces. False-belief tasks are widely used to identify neural correlates of ToM as a measure of awareness of others. We performed an activation likelihood estimation meta-analysis, using the fMRI literature on self-face recognition and false-belief tasks. The brain areas involved in performing false-belief tasks were the medial prefrontal cortex (MPFC), bilateral temporo-parietal junction, precuneus, and the bilateral middle temporal gyrus. Distinct self-face recognition regions were the right superior temporal gyrus, the right parahippocampal gyrus, the right inferior frontal gyrus/anterior cingulate cortex, and the left inferior parietal lobe. Overlapping brain areas were the superior temporal gyrus, and the more ventral parts of the MPFC. We confirmed that self-recognition in contrast to recognition of others' faces, and awareness of others involves a network that consists of separate, distinct neural pathways, but also includes overlapping regions of higher order prefrontal cortex where these processes may be combined. Insights derived from the neurobiology of disorders such as autism and schizophrenia are consistent with this notion.
Familiarity and face emotion recognition in patients with schizophrenia.
Lahera, Guillermo; Herrera, Sara; Fernández, Cristina; Bardón, Marta; de los Ángeles, Victoria; Fernández-Liria, Alberto
2014-01-01
To assess emotion recognition in familiar and unknown faces in a sample of patients with schizophrenia and healthy controls. Face emotion recognition in 18 outpatients diagnosed with schizophrenia (DSM-IV-TR) and 18 healthy volunteers was assessed with two emotion recognition tasks using familiar faces and unknown faces. Each subject was accompanied by four familiar people (parents, siblings or friends), who were photographed while expressing Ekman's six basic emotions. Face emotion recognition in familiar faces was assessed with this ad hoc instrument. In each case, the patient rated (from 1 to 10) the subjective familiarity and affective valence corresponding to each person. Patients with schizophrenia not only showed a deficit in recognizing emotions in unknown faces (p=.01), but an even more pronounced deficit with familiar faces (p=.001). Controls had similar success rates in the unknown-faces task (mean: 18 +/- 2.2) and the familiar-faces task (mean: 17.4 +/- 3). However, patients scored significantly lower in the familiar-faces task (mean: 13.2 +/- 3.8) than in the unknown-faces task (mean: 16 +/- 2.4; p<.05). In both tests, most errors involved the emotions of anger and fear. Subjectively, the patient group reported lower familiarity and emotional valence toward their respective relatives (p<.01). The sense of familiarity may be a factor involved in face emotion recognition, and it may be disturbed in schizophrenia. © 2013.
Hino, Yasushi; Kusunose, Yuu; Miyamura, Shinobu; Lupker, Stephen J
2017-01-01
In most models of word processing, the degrees of consistency in the mappings between orthographic, phonological, and semantic representations are hypothesized to affect reading time. Following Hino, Miyamura, and Lupker's (2011) examination of the orthographic-phonological (O-P) and orthographic-semantic (O-S) consistency for 1,114 Japanese words (339 katakana and 775 kanji words), in the present research, we initially attempted to measure the phonological-orthographic (P-O) consistency for those same words. In contrast to the O-P and O-S consistencies, which were equivalent for kanji and katakana words, the P-O relationships were much more inconsistent for the kanji words than for the katakana words. The impact of kanji words' P-O consistency was then examined in both visual and auditory word recognition tasks. Although there was no effect of P-O consistency in the standard visual lexical-decision task, significant effects were detected in a lexical-decision task with auditory stimuli, in a perceptual identification task using masked visual stimuli, and in a lexical-decision task with degraded visual stimuli. The implications of these results are discussed in terms of the impact of P-O consistency in auditory and visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Cross, Laura; Brown, Malcolm W; Aggleton, John P; Warburton, E Clea
2012-12-21
In humans, recognition memory deficits, a typical feature of diencephalic amnesia, have been tentatively linked to damage to the mediodorsal thalamic nucleus (MD). Animal studies have occasionally investigated the role of the MD in single-item recognition, but have not systematically analyzed its involvement in other recognition memory processes. In Experiment 1 rats with bilateral excitotoxic lesions in the MD or the medial prefrontal cortex (mPFC) were tested in tasks that assessed single-item recognition (novel object preference), associative recognition memory (object-in-place), and recency discrimination (recency memory task). Experiment 2 examined the functional importance of the interactions between the MD and mPFC using disconnection techniques. Unilateral excitotoxic lesions were placed in both the MD and the mPFC in either the same (MD + mPFC Ipsi) or opposite hemispheres (MD + mPFC Contra group). Bilateral lesions in the MD or mPFC impaired object-in-place and recency memory tasks, but had no effect on novel object preference. In Experiment 2 the MD + mPFC Contra group was significantly impaired in the object-in-place and recency memory tasks compared with the MD + mPFC Ipsi group, but novel object preference was intact. Thus, connections between the MD and mPFC are critical for recognition memory when the discriminations involve associative or recency information. However, the rodent MD is not necessary for single-item recognition memory.
Recognition intent and visual word recognition.
Wang, Man-Ying; Ching, Chi-Le
2009-03-01
This study adopted a change detection task to investigate whether and how recognition intent affects the construction of orthographic representation in visual word recognition. Chinese readers (Experiment 1-1) and nonreaders (Experiment 1-2) detected color changes in radical components of Chinese characters. Explicit recognition demand was imposed in Experiment 2 by an additional recognition task. When the recognition was implicit, a bias favoring the radical location informative of character identity was found in Chinese readers (Experiment 1-1), but not nonreaders (Experiment 1-2). With explicit recognition demands, the effect of radical location interacted with radical function and word frequency (Experiment 2). An estimate of identification performance under implicit recognition was derived in Experiment 3. These findings reflect the joint influence of recognition intent and orthographic regularity in shaping readers' orthographic representation. The implication for the role of visual attention in word recognition was also discussed.
Interest and attention in facial recognition.
Burgess, Melinda C R; Weaver, George E
2003-04-01
When applied to facial recognition, the levels of processing paradigm has yielded consistent results: faces processed in deep conditions are recognized better than faces processed under shallow conditions. However, there are multiple explanations for this occurrence. The own-race advantage in facial recognition, the tendency to recognize faces from one's own race better than faces from another race, is also consistently shown but not clearly explained. This study was designed to test the hypothesis that the levels of processing findings in facial recognition are a result of interest and attention, not differences in processing. This hypothesis was tested for both own and other faces with 105 Caucasian general psychology students. Levels of processing was manipulated as a between-subjects variable; students were asked to answer one of four types of study questions, e.g., "deep" or "shallow" processing questions, while viewing the study faces. Students' recognition of a subset of previously presented Caucasian and African-American faces from a test-set with an equal number of distractor faces was tested. They indicated their interest in and attention to the task. The typical levels of processing effect was observed with better recognition performance in the deep conditions than in the shallow conditions for both own- and other-race faces. The typical own-race advantage was also observed regardless of level of processing condition. For both own- and other-race faces, level of processing explained a significant portion of the recognition variance above and beyond what was explained by interest in and attention to the task.
Sully, K; Sonuga-Barke, E J S; Fairchild, G
2015-07-01
There is accumulating evidence of impairments in facial emotion recognition in adolescents with conduct disorder (CD). However, the majority of studies in this area have only been able to demonstrate an association, rather than a causal link, between emotion recognition deficits and CD. To move closer towards understanding the causal pathways linking emotion recognition problems with CD, we studied emotion recognition in the unaffected first-degree relatives of CD probands, as well as those with a diagnosis of CD. Using a family-based design, we investigated facial emotion recognition in probands with CD (n = 43), their unaffected relatives (n = 21), and healthy controls (n = 38). We used the Emotion Hexagon task, an alternative forced-choice task using morphed facial expressions depicting the six primary emotions, to assess facial emotion recognition accuracy. Relative to controls, the CD group showed impaired recognition of anger, fear, happiness, sadness and surprise (all p < 0.005). Similar to probands with CD, unaffected relatives showed deficits in anger and happiness recognition relative to controls (all p < 0.008), with a trend toward a deficit in fear recognition. There were no significant differences in performance between the CD probands and the unaffected relatives following correction for multiple comparisons. These results suggest that facial emotion recognition deficits are present in adolescents who are at increased familial risk for developing antisocial behaviour, as well as those who have already developed CD. Consequently, impaired emotion recognition appears to be a viable familial risk marker or candidate endophenotype for CD.
Moberly, Aaron C; Patel, Tirth R; Castellanos, Irina
2018-02-01
We hypothesized that, as a result of their hearing loss, adults with cochlear implants (CIs) would self-report poorer executive functioning (EF) skills than normal-hearing (NH) peers, and that these EF skills would be associated with performance on speech recognition tasks. EF refers to a group of higher-order neurocognitive skills responsible for behavioral and emotional regulation during goal-directed activity, and EF has been found to be poorer in children with CIs than in their NH age-matched peers. Moreover, there is increasing evidence that neurocognitive skills, including some EF skills, contribute to the ability to recognize speech through a CI. Thirty postlingually deafened adults with CIs and 42 age-matched NH adults were enrolled. Participants and their spouses or significant others (informants) completed well-validated self-reports or informant-reports of EF, the Behavior Rating Inventory of Executive Function - Adult (BRIEF-A). CI users' speech recognition skills were assessed in quiet using several measures of sentence recognition. NH peers were tested for recognition of noise-vocoded versions of the same speech stimuli. CI users self-reported difficulty on EF tasks of shifting and task monitoring. In CI users, measures of speech recognition correlated with several self-reported EF skills. The present findings provide further evidence that neurocognitive factors, including specific EF skills, may decline in association with hearing loss, and that some of these EF skills contribute to speech processing under degraded listening conditions.
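Noise vocoding, the degradation applied to the NH listeners' stimuli, follows a standard recipe: split the signal into frequency bands, extract each band's amplitude envelope, and use the envelopes to modulate band-limited noise. Below is a toy DFT-based sketch of that recipe; it is an illustration only (real vocoders use proper filter banks and many channels, and the study's actual stimulus parameters are not specified here).

```python
import cmath
import math
import random

def dft(x):
    """Naive O(n^2) discrete Fourier transform (fine for a toy example)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def band_pass(signal, lo_bin, hi_bin):
    """Zero out all DFT bins except lo_bin..hi_bin (and their mirrors)."""
    X = dft(signal)
    n = len(X)
    Y = [0j] * n
    for k in range(lo_bin, hi_bin + 1):
        Y[k] = X[k]
        Y[(n - k) % n] = X[(n - k) % n]
    return idft(Y)

def envelope(x, win=8):
    """Amplitude envelope: rectify, then smooth with a moving average."""
    r = [abs(v) for v in x]
    return [sum(r[max(0, i - win):i + 1]) / (i + 1 - max(0, i - win))
            for i in range(len(r))]

def noise_vocode(signal, bands):
    """Replace each band's fine structure with envelope-modulated noise."""
    rng = random.Random(0)  # fixed seed for reproducibility
    out = [0.0] * len(signal)
    for lo, hi in bands:
        env = envelope(band_pass(signal, lo, hi))
        noise = band_pass([rng.uniform(-1.0, 1.0) for _ in signal], lo, hi)
        for i in range(len(out)):
            out[i] += env[i] * noise[i]
    return out

n = 128
tone = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]  # stand-in "speech"
vocoded = noise_vocode(tone, bands=[(1, 8), (9, 32)])
print(len(vocoded))
```

The vocoded output preserves the slow amplitude fluctuations of each band while discarding spectral fine structure, which is why such stimuli approximate what a CI user hears.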
Training improves reading speed in peripheral vision: is it due to attention?
Lee, Hye-Won; Kwon, Miyoung; Legge, Gordon E; Gefroh, Joshua J
2010-06-01
Previous research has shown that perceptual training in peripheral vision, using a letter-recognition task, increases reading speed and letter recognition (S. T. L. Chung, G. E. Legge, & S. H. Cheung, 2004). We tested the hypothesis that enhanced deployment of spatial attention to peripheral vision explains this training effect. Subjects were pre- and post-tested with three tasks at 10° above and below fixation: RSVP reading speed, trigram letter recognition (used to construct visual-span profiles), and deployment of spatial attention (measured as the benefit of a pre-cue for target position in a lexical-decision task). Groups of five normally sighted young adults received 4 days of trigram letter-recognition training in upper or lower visual fields, or central vision. A control group received no training. Our measure of deployment of spatial attention revealed visual-field anisotropies: better deployment of attention in the lower field than in the upper field, and in the lower-right quadrant compared with the other three quadrants. All subject groups exhibited slight improvement in deployment of spatial attention to peripheral vision in the post-test, but this improvement was not correlated with training-related increases in reading speed and the size of visual-span profiles. Our results indicate that improved deployment of spatial attention to peripheral vision does not account for improved reading speed and letter recognition in peripheral vision.
Guillaume, Fabrice; Etienne, Yann
2015-03-01
Using two exclusion tasks, the present study examined how the ERP correlates of face recognition are affected by the nature of the information to be retrieved. Intrinsic (facial expression) and extrinsic (background scene) visual information were paired with face identity and constituted the exclusion criterion at test time. Although perceptual information had to be taken into account in both situations, the FN400 old-new effect was observed only for old target faces on the expression-exclusion task, whereas it was found for both old target and old non-target faces in the background-exclusion situation. These results reveal that the FN400, which is generally interpreted as a correlate of familiarity, was modulated by the retrieval of intra-item and intrinsic face information, but not by the retrieval of extrinsic information. The observed effects on the FN400 depended on the nature of the information to be retrieved and its relationship (unitization) to the recognition target. On the other hand, the parietal old-new effect (generally described as an ERP correlate of recollection) reflected the retrieval of both types of contextual features equivalently. The current findings are discussed in relation to recent controversies about the nature of the recognition processes reflected by the ERP correlates of face recognition. Copyright © 2015 Elsevier B.V. All rights reserved.
Muñoz, Pablo C; Aspé, Mauricio A; Contreras, Luis S; Palacios, Adrián G
2010-01-01
Object recognition memory allows discrimination between novel and familiar objects. This kind of memory consists of two components: recollection, which depends on the hippocampus, and familiarity, which depends on the perirhinal cortex (Pcx). The importance of brain-derived neurotrophic factor (BDNF) for recognition memory has already been recognized. Recent evidence suggests that DNA methylation regulates the expression of BDNF and memory. Behavioral and molecular approaches were used to understand the potential contribution of DNA methylation to recognition memory. To that end, rats were tested for their ability to distinguish novel from familiar objects by using a spontaneous object recognition task. Furthermore, the level of DNA methylation was estimated after trials with a methyl-sensitive PCR. We found a significant correlation between performance on the novel object task and the expression of BDNF: negative in hippocampal slices and positive in perirhinal cortical slices. By contrast, DNA methylation at CpG island 1 in the promoter of BDNF exon 1 correlated with performance only in hippocampal slices, not in Pcx slices from trained animals. These results suggest that DNA methylation may be involved in the regulation of the BDNF gene during recognition memory, at least in the hippocampus.
Concept recognition for extracting protein interaction relations from biomedical text
Baumgartner, William A; Lu, Zhiyong; Johnson, Helen L; Caporaso, J Gregory; Paquette, Jesse; Lindemann, Anna; White, Elizabeth K; Medvedeva, Olga; Cohen, K Bretonnel; Hunter, Lawrence
2008-01-01
Background: Reliable information extraction applications have been a long-sought goal of the biomedical text mining community, a goal that, if reached, would provide valuable tools to benchside biologists in their increasingly difficult task of assimilating the knowledge contained in the biomedical literature. We present an integrated approach to concept recognition in biomedical text. Concept recognition provides key information that has been largely missing from previous biomedical information extraction efforts, namely direct links to well-defined knowledge resources that explicitly cement the concept's semantics. The BioCreative II tasks discussed in this special issue have provided a unique opportunity to demonstrate the effectiveness of concept recognition in the field of biomedical language processing. Results: Through the modular construction of a protein interaction relation extraction system, we present several use cases of concept recognition in biomedical text, and relate these use cases to potential uses by the benchside biologist. Conclusion: Current information extraction technologies are approaching performance standards at which concept recognition can begin to deliver high quality data to the benchside biologist. Our system is available as part of the BioCreative Meta-Server project and on the internet. PMID:18834500
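The core of concept recognition, linking a text mention directly to an identifier in a well-defined knowledge resource rather than merely tagging it as an entity, can be illustrated with a minimal dictionary-based matcher. The dictionary entries and identifiers below are invented for illustration; the actual system combines several more sophisticated recognizers.

```python
import re

# Toy dictionary mapping surface forms to knowledge-resource identifiers.
# Entries and IDs are invented for illustration only.
CONCEPT_DICT = {
    "p53": "PR:000003035",
    "tumor protein p53": "PR:000003035",
    "mdm2": "PR:000010335",
}

def recognize_concepts(text):
    """Return (start, end, mention, concept_id) tuples for dictionary hits,
    preferring longer matches over shorter overlapping ones."""
    hits = []
    lowered = text.lower()
    # Sort terms longest-first so "tumor protein p53" wins over "p53".
    for term in sorted(CONCEPT_DICT, key=len, reverse=True):
        for m in re.finditer(r"\b" + re.escape(term) + r"\b", lowered):
            span = (m.start(), m.end())
            # Skip spans already covered by a longer match.
            if any(s <= span[0] and span[1] <= e for s, e, *_ in hits):
                continue
            hits.append((span[0], span[1], text[m.start():m.end()],
                         CONCEPT_DICT[term]))
    return sorted(hits)

print(recognize_concepts("Tumor protein p53 binds MDM2."))
```

Because every hit carries a resource identifier, downstream relation extraction can operate on normalized concepts instead of raw strings.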
Semantic Neighborhood Effects for Abstract versus Concrete Words
Danguecan, Ashley N.; Buchanan, Lori
2016-01-01
Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words. PMID:27458422
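Semantic neighborhood density is a global co-occurrence variable; one common operationalization scores a word by its similarity to its nearest neighbors in co-occurrence space. A minimal sketch of that idea with invented toy vectors (not the corpus or exact measure used in the study):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def snd(word, vectors, k=2):
    """Mean cosine similarity between `word` and its k nearest neighbors
    in co-occurrence space; higher values mean a denser neighborhood."""
    sims = sorted(
        (cosine(vectors[word], vec) for w, vec in vectors.items() if w != word),
        reverse=True,
    )
    return sum(sims[:k]) / k

# Toy co-occurrence vectors (invented for illustration).
vectors = {
    "idea":    [0.9, 0.1, 0.4],
    "thought": [0.8, 0.2, 0.5],
    "justice": [0.1, 0.9, 0.2],
    "truth":   [0.2, 0.8, 0.3],
}
print(round(snd("idea", vectors), 3))
```

High- and low-SND word sets like those in the study would then be formed by splitting words at a threshold on this score.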
Generalization between canonical and non-canonical views in object recognition
Ghose, Tandra; Liu, Zili
2013-01-01
Viewpoint generalization in object recognition is the process that allows recognition of a given 3D object from many different viewpoints despite variations in its 2D projections. We used the canonical view effects as a foundation to empirically test the validity of a major theory in object recognition, the view-approximation model (Poggio & Edelman, 1990). This model predicts that generalization should be better when an object is first seen from a non-canonical view and then a canonical view than when seen in the reversed order. We also manipulated object similarity to study the degree to which this view generalization was constrained by shape details and task instructions (object vs. image recognition). Old-new recognition performance for basic and subordinate level objects was measured in separate blocks. We found that for object recognition, view generalization between canonical and non-canonical views was comparable for basic level objects. For subordinate level objects, recognition performance was more accurate from non-canonical to canonical views than the other way around. When the task was changed from object recognition to image recognition, the pattern of the results reversed. Interestingly, participants responded “old” to “new” images of “old” objects with a substantially higher rate than to “new” objects, despite instructions to the contrary, thereby indicating involuntary view generalization. Our empirical findings are incompatible with the prediction of the view-approximation theory, and argue against the hypothesis that views are stored independently. PMID:23283692
Dennett, Hugh W; McKone, Elinor; Tavashmi, Raka; Hall, Ashleigh; Pidcock, Madeleine; Edwards, Mark; Duchaine, Bradley
2012-06-01
Many research questions require a within-class object recognition task matched for general cognitive requirements with a face recognition task. If the object task also has high internal reliability, it can improve accuracy and power in group analyses (e.g., mean inversion effects for faces vs. objects), individual-difference studies (e.g., correlations between certain perceptual abilities and face/object recognition), and case studies in neuropsychology (e.g., whether a prosopagnosic shows a face-specific or object-general deficit). Here, we present such a task. Our Cambridge Car Memory Test (CCMT) was matched in format to the established Cambridge Face Memory Test, requiring recognition of exemplars across view and lighting change. We tested 153 young adults (93 female). Results showed high reliability (Cronbach's alpha = .84) and a range of scores suitable both for normal-range individual-difference studies and, potentially, for diagnosis of impairment. The mean for males was much higher than the mean for females. We demonstrate independence between face memory and car memory (dissociation based on sex, plus a modest correlation between the two), including where participants have high relative expertise with cars. We also show that expertise with real car makes and models of the era used in the test significantly predicts CCMT performance. Surprisingly, however, regression analyses imply that there is an effect of sex per se on the CCMT that is not attributable to a stereotypical male advantage in car expertise.
Age-Related Effects of Stimulus Type and Congruency on Inattentional Blindness.
Liu, Han-Hui
2018-01-01
Background: Most of the previous inattentional blindness (IB) studies focused on the factors that contributed to the detection of unattended stimuli. Age-related changes in IB have rarely been investigated across all age groups. In the current study, by using the dual-task IB paradigm, we aimed to explore the age-related effects of attended stimulus type and of congruency between attended and unattended stimuli on IB. Methods: The current study recruited 111 participants (30 adolescents, 48 young adults, and 33 middle-aged adults) in the baseline recognition experiments and 341 participants (135 adolescents, 135 young adults, and 71 middle-aged adults) in the IB experiment. We applied the superimposed picture and word streams experimental paradigm to explore the age-related effects of attended stimulus type and congruency between attended and unattended stimuli on IB. An ANOVA was performed to analyze the results. Results: Participants across all age groups presented significantly lower recognition scores for both pictures and words in comparison with baseline recognition. Recognition of unattended pictures and words decreased from adolescents to young adults to middle-aged adults. When the pictures and words were congruent, all participants showed significantly higher recognition scores for unattended stimuli than in the incongruent condition. Adolescents and young adults did not show recognition differences when the primary task was attending to pictures or to words. Conclusion: The current findings showed that all participants presented better recognition scores for attended stimuli in comparison with unattended stimuli, and that recognition scores decreased from adolescents to young and middle-aged adults. The findings partly supported the attention capacity models of IB.
A predictive study of reading comprehension in third-grade Spanish students.
López-Escribano, Carmen; Elosúa de Juan, María Rosa; Gómez-Veiga, Isabel; García-Madruga, Juan Antonio
2013-01-01
The study of the contribution of language and cognitive skills to reading comprehension is an important goal of current reading research. However, reading comprehension is not easily assessed by a single instrument, as different comprehension tests vary in the type of tasks used and in the cognitive demands required. This study examines the contribution of basic language and cognitive skills (decoding, word recognition, reading speed, verbal and nonverbal intelligence and working memory) to reading comprehension, assessed by two tests utilizing various tasks that require different skill sets in third-grade Spanish-speaking students. Linguistic and cognitive abilities predicted reading comprehension. A measure of reading speed (the reading time of pseudo-words) was the best predictor of reading comprehension when assessed by the PROLEC-R test. However, measures of word recognition (the orthographic choice task) and verbal working memory were the best predictors of reading comprehension when assessed by means of the DARC test. These results show, on the one hand, that reading speed and word recognition are better predictors of Spanish language comprehension than reading accuracy. On the other, the reading comprehension test applied here serves as a critical variable when analyzing and interpreting results regarding this topic.
Two speed factors of visual recognition independently correlated with fluid intelligence.
Tachibana, Ryosuke; Namba, Yuri; Noguchi, Yasuki
2014-01-01
Growing evidence indicates a moderate but significant relationship between processing speed in visuo-cognitive tasks and general intelligence. Meanwhile, neuroscience findings indicate that the primate visual system consists of two major pathways: the ventral pathway for object recognition and the dorsal pathway for spatial processing and attentive analysis. Previous studies seeking visuo-cognitive factors of human intelligence found a significant correlation between fluid intelligence and inspection time (IT), an index of the speed of object recognition performed in the ventral pathway. We therefore examined the possibility that neural processing speed in the dorsal pathway also contributes to intelligence. Specifically, we used the mental rotation (MR) task, a popular psychometric measure of the mental speed of spatial processing in the dorsal pathway. We found that MR speed was significantly correlated with intelligence scores, whereas it showed no correlation with IT (the recognition speed of visual objects). Our results support the possibility that intelligence can be explained by two types of mental speed: one related to object recognition (IT) and another to the manipulation of mental images (MR).
Emotion Recognition in Face and Body Motion in Bulimia Nervosa.
Dapelo, Marcela Marin; Surguladze, Simon; Morris, Robin; Tchanturia, Kate
2017-11-01
Social cognition has been studied extensively in anorexia nervosa (AN), but there are few studies in bulimia nervosa (BN). This study investigated the ability of people with BN to recognise emotions in ambiguous facial expressions and in body movement. Participants were 26 women with BN, who were compared with 35 with AN, and 42 healthy controls. Participants completed an emotion recognition task by using faces portraying blended emotions, along with a body emotion recognition task by using videos of point-light walkers. The results indicated that BN participants exhibited difficulties recognising disgust in less-ambiguous facial expressions, and a tendency to interpret non-angry faces as anger, compared with healthy controls. These difficulties were similar to those found in AN. There were no significant differences amongst the groups in body motion emotion recognition. The findings suggest that difficulties with disgust and anger recognition in facial expressions may be shared transdiagnostically in people with eating disorders. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.
Recognition of emotion from body language among patients with unipolar depression
Loi, Felice; Vaidya, Jatin G.; Paradiso, Sergio
2013-01-01
Major depression may be associated with abnormal perception of emotions and impairment in social adaptation. Emotion recognition from body language and its possible implications to social adjustment have not been examined in patients with depression. Three groups of participants (51 with depression; 68 with history of depression in remission; and 69 never depressed healthy volunteers) were compared on static and dynamic tasks of emotion recognition from body language. Psychosocial adjustment was assessed using the Social Adjustment Scale Self-Report (SAS-SR). Participants with current depression showed reduced recognition accuracy for happy stimuli across tasks relative to remission and comparison participants. Participants with depression tended to show poorer psychosocial adaptation relative to remission and comparison groups. Correlations between perception accuracy of happiness and scores on the SAS-SR were largely not significant. These results indicate that depression is associated with reduced ability to appraise positive stimuli of emotional body language but emotion recognition performance is not tied to social adjustment. These alterations do not appear to be present in participants in remission suggesting state-like qualities. PMID:23608159
Romani, Maria; Vigliante, Miriam; Faedda, Noemi; Rossetti, Serena; Pezzuti, Lina; Guidetti, Vincenzo; Cardona, Francesco
2018-06-01
This review focuses on facial recognition abilities in children and adolescents with attention deficit hyperactivity disorder (ADHD). A systematic review, using PRISMA guidelines, was conducted to identify original articles published prior to May 2017 pertaining to memory, face recognition, affect recognition, facial expression recognition and recall of faces in children and adolescents with ADHD. The qualitative synthesis based on different studies shows a particular focus of the research on facial affect recognition without paying similar attention to the structural encoding of facial recognition. In this review, we further investigate facial recognition abilities in children and adolescents with ADHD, providing synthesis of the results observed in the literature, while detecting face recognition tasks used on face processing abilities in ADHD and identifying aspects not yet explored. Copyright © 2018 Elsevier Ltd. All rights reserved.
Electrophysiological evidence for women superiority on unfamiliar face processing.
Sun, Tianyi; Li, Lin; Xu, Yuanli; Zheng, Li; Zhang, Weidong; Zhou, Fanzhi Anita; Guo, Xiuyan
2017-02-01
Previous research has reported women's superiority on face recognition tasks, taking sex differences in accuracy rates as the major evidence. By appropriately modifying experimental tasks and examining reaction time as a behavioral measure, it is possible to explore which stage of face processing contributes to women's superiority. We used a modified delayed matching-to-sample task to investigate the time course characteristics of face recognition with ERPs, for both men and women. In each trial, participants matched successively presented faces to samples (target faces) by key pressing. Women were more accurate and faster than men on the task. ERP results showed that, compared to men, women had shorter peak latencies for the early components P100 and N170, as well as a larger mean amplitude of the late positive component P300. Correlations between P300 mean amplitudes and RTs were found for both sexes. In addition, reaction times of women, but not men, were positively correlated with N170 latencies. Overall, we provide further evidence for women's superiority in face recognition in both behavioral and neural measures. Copyright © 2016 Elsevier Ireland Ltd and Japan Neuroscience Society. All rights reserved.
Recognition of chemical entities: combining dictionary-based and grammar-based approaches
2015-01-01
Background: The past decade has seen an upsurge in the number of publications in chemistry. The ever-swelling volume of available documents makes it increasingly hard to extract relevant new information from such unstructured texts. The BioCreative CHEMDNER challenge invites the development of systems for the automatic recognition of chemicals in text (CEM task) and for ranking the recognized compounds at the document level (CDI task). We investigated an ensemble approach where dictionary-based named entity recognition is used along with grammar-based recognizers to extract compounds from text. We assessed the performance of ten different commercial and publicly available lexical resources using an open source indexing system (Peregrine), in combination with three different chemical compound recognizers and a set of regular expressions to recognize chemical database identifiers. The effect of different stop-word lists, case-sensitivity matching, and use of chunking information was also investigated. We focused on lexical resources that provide chemical structure information. To rank the different compounds found in a text, we used a term confidence score based on the normalized ratio of the term frequencies in chemical and non-chemical journals. Results: The use of stop-word lists greatly improved the performance of the dictionary-based recognition, but there was no additional benefit from using chunking information. A combination of ChEBI and HMDB as lexical resources, the LeadMine tool for grammar-based recognition, and the regular expressions, outperformed any of the individual systems. On the test set, the F-scores were 77.8% (recall 71.2%, precision 85.8%) for the CEM task and 77.6% (recall 71.7%, precision 84.6%) for the CDI task. Missed terms were mainly due to tokenization issues, poor recognition of formulas, and term conjunctions.
Conclusions: We developed an ensemble system that combines dictionary-based and grammar-based approaches for chemical named entity recognition, outperforming any of the individual systems that we considered. The system is able to provide structure information for most of the compounds that are found. Improved tokenization and better recognition of specific entity types is likely to further improve system performance. PMID:25810767
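The document-level ranking step (a term confidence score based on the normalized ratio of term frequencies in chemical and non-chemical journals) can be sketched as follows. The record does not give the exact formula, so the add-one smoothing and the normalization below are assumptions for illustration only.

```python
from collections import Counter

def term_confidence(term, chem_counts, nonchem_counts, chem_total, nonchem_total):
    """Confidence that `term` is a chemical entity, based on the normalized
    ratio of its relative frequency in chemical vs. non-chemical journals.
    Add-one smoothing (an assumption) avoids division by zero for unseen terms."""
    f_chem = (chem_counts[term] + 1) / (chem_total + 1)
    f_non = (nonchem_counts[term] + 1) / (nonchem_total + 1)
    return f_chem / (f_chem + f_non)  # in (0, 1); higher = more "chemical"

# Toy token counts from hypothetical chemical and non-chemical journals.
chem = Counter({"benzene": 50, "reaction": 30, "the": 500})
nonchem = Counter({"benzene": 1, "reaction": 5, "the": 800})

score_benzene = term_confidence("benzene", chem, nonchem, 580, 806)
score_the = term_confidence("the", chem, nonchem, 580, 806)
```

A domain-specific term like "benzene" scores near 1, while a common function word scores near chance, which is the behavior a document-level ranker needs.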
von Piekartz, H; Wallwork, S B; Mohr, G; Butler, D S; Moseley, G L
2015-04-01
Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotional recognition tasks could reflect problems with emotional processing, motor processing or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on both tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks; in the first task, they identified the facial emotion presented in a photograph. In the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm to induce implicit motor imagery. People with chronic facial pain performed worse than controls at both tasks (Facially Expressed Emotion Labelling (FEEL) task P < 0·001; left/right judgment task P < 0·001). Participants who were more accurate at one task were also more accurate at the other, regardless of group (P < 0·001, r(2) = 0·523). Participants with chronic facial pain were worse than controls at both the FEEL emotion recognition task and the left/right facial expression task and performance covaried within participants. We propose that disrupted motor processing may underpin or at least contribute to the difficulty that facial pain patients have in emotion recognition and that further research that tests this proposal is warranted. © 2014 John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Bartko, Susan J.; Winters, Boyer D.; Cowell, Rosemary A.; Saksida, Lisa M.; Bussey, Timothy J.
2007-01-01
The perirhinal cortex (PRh) has a well-established role in object recognition memory. More recent studies suggest that PRh is also important for two-choice visual discrimination tasks. Specifically, it has been suggested that PRh contains conjunctive representations that help resolve feature ambiguity, which occurs when a task cannot easily be…
Acute Alcohol Effects on Repetition Priming and Word Recognition Memory with Equivalent Memory Cues
ERIC Educational Resources Information Center
Ray, Suchismita; Bates, Marsha E.
2006-01-01
Acute alcohol intoxication effects on memory were examined using a recollection-based word recognition memory task and a repetition priming task of memory for the same information without explicit reference to the study context. Memory cues were equivalent across tasks; encoding was manipulated by varying the frequency of occurrence (FOC) of words…
Biologically inspired emotion recognition from speech
NASA Astrophysics Data System (ADS)
Caponetti, Laura; Buscicchio, Cosimo Alessandro; Castellano, Giovanna
2011-12-01
Emotion recognition has become a fundamental task in human-computer interaction systems. In this article, we propose an emotion recognition approach based on biologically inspired methods. Specifically, emotion classification is performed using a long short-term memory (LSTM) recurrent neural network which is able to recognize long-range dependencies between successive temporal patterns. We propose to represent data using features derived from two different models: mel-frequency cepstral coefficients (MFCC) and the Lyon cochlear model. In the experimental phase, results obtained from the LSTM network and the two different feature sets are compared, showing that features derived from the Lyon cochlear model give better recognition results in comparison with those obtained with the traditional MFCC representation.
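The long short-term memory mechanism credited above with recognizing long-range dependencies between successive temporal patterns can be illustrated with a minimal, scalar LSTM step. The weights and dimensions below are illustrative only, not the network actually used in the article.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step for scalar input and state (illustrative dimensions).
    W maps each gate name to weights (w_x, w_h, b): input (i), forget (f),
    output (o) gates and the candidate cell update (g)."""
    gates = {}
    for name in ("i", "f", "o", "g"):
        w_x, w_h, b = W[name]
        pre = w_x * x + w_h * h_prev + b
        gates[name] = math.tanh(pre) if name == "g" else sigmoid(pre)
    c = gates["f"] * c_prev + gates["i"] * gates["g"]  # gated memory update
    h = gates["o"] * math.tanh(c)                      # exposed hidden state
    return h, c

random.seed(0)
W = {name: tuple(random.uniform(-1, 1) for _ in range(3))
     for name in ("i", "f", "o", "g")}
h, c = 0.0, 0.0
for x in [0.1, -0.4, 0.9, 0.2]:  # e.g. a short sequence of acoustic features
    h, c = lstm_step(x, h, c, W)
```

The forget gate lets the cell state carry information across many steps, which is what allows the network to relate temporally distant patterns in the feature sequence.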
Zeijlmans van Emmichoven, Ingeborg A; van IJzendoorn, Marinus H; de Ruiter, Corine; Brosschot, Jos F
2003-01-01
To investigate the effect of the mental representation of attachment on information processing, 28 anxiety disorder outpatients, as diagnosed by the Anxiety Disorders Interview Schedule-Revised, were administered the Adult Attachment Interview and the State-Trait Anxiety Inventory. They also completed an emotional Stroop task with subliminal and supraliminal exposure conditions, a free recall memory task, and a recognition test. All tasks contained threatening, neutral, and positively valenced stimuli. A nonclinical comparison group of 56 participants completed the same measures. Results on the Stroop task showed color-naming interference for threatening words in the supraliminal condition only. Nonclinical participants with insecure attachment representations showed a global response inhibition to the Stroop task. Clinical participants with secure attachment representations showed the largest Stroop interference of the threatening words compared to the other groups. Results on the free recall task showed superior recall of all types of stimuli by participants with secure attachment representations. In the outpatient group, participants with secure attachment representations showed superior recall of threatening words on the free recall task, compared to insecure participants. Results on the recognition task showed no differences between attachment groups. We conclude that secure attachment representations are characterized by open communication about and processing of threatening information, leading to less defensive exclusion of negative material during the attentional stage of information processing and to better recall of threatening information in a later stage. Attachment insecurity, but not the type of insecurity, seems a decisive factor in attention and memory processes.
Kreitewolf, Jens; Friederici, Angela D; von Kriegstein, Katharina
2014-11-15
Hemispheric specialization for linguistic prosody is a controversial issue. While it is commonly assumed that linguistic prosody and emotional prosody are preferentially processed in the right hemisphere, neuropsychological work directly comparing processes of linguistic prosody and emotional prosody suggests a predominant role of the left hemisphere for linguistic prosody processing. Here, we used two functional magnetic resonance imaging (fMRI) experiments to clarify the role of left and right hemispheres in the neural processing of linguistic prosody. In the first experiment, we sought to confirm previous findings showing that linguistic prosody processing compared to other speech-related processes predominantly involves the right hemisphere. Unlike previous studies, we controlled for stimulus influences by employing a prosody and speech task using the same speech material. The second experiment was designed to investigate whether a left-hemispheric involvement in linguistic prosody processing is specific to contrasts between linguistic prosody and emotional prosody or whether it also occurs when linguistic prosody is contrasted against other non-linguistic processes (i.e., speaker recognition). Prosody and speaker tasks were performed on the same stimulus material. In both experiments, linguistic prosody processing was associated with activity in temporal, frontal, parietal and cerebellar regions. Activation in temporo-frontal regions showed differential lateralization depending on whether the control task required recognition of speech or speaker: recognition of linguistic prosody predominantly involved right temporo-frontal areas when it was contrasted against speech recognition; when contrasted against speaker recognition, recognition of linguistic prosody predominantly involved left temporo-frontal areas. 
The results show that linguistic prosody processing involves functions of both hemispheres and suggest that recognition of linguistic prosody is based on an inter-hemispheric mechanism which exploits both a right-hemispheric sensitivity to pitch information and a left-hemispheric dominance in speech processing. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Cazzell, Samantha; Skinner, Christopher H.; Ciancio, Dennis; Aspiranti, Kathleen; Watson, Tiffany; Taylor, Kala; McCurdy, Merilee; Skinner, Amy
2017-01-01
A concurrent multiple-baseline across-tasks design was used to evaluate the effectiveness of a computer flash-card sight-word recognition intervention with elementary-school students with intellectual disability. This intervention allowed the participants to self-determine each response interval and resulted in both participants acquiring…
Saneyoshi, Ayako; Michimata, Chikashi
2009-12-01
Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to a different position on geon A. The Categorical task consisted of the original and the categorically transformed objects. The Coordinate task consisted of the original and the coordinately transformed objects. The original object was presented to the central visual field, followed by a comparison object presented to the right or left visual half-fields (RVF and LVF). The results showed an RVF advantage for the Categorical task and an LVF advantage for the Coordinate task. The possibility that categorical and coordinate spatial processing subsystems would be basic computational elements for between- and within-category object recognition was discussed.
Arginine Vasopressin selectively enhances recognition of sexual cues in male humans.
Guastella, Adam J; Kenyon, Amanda R; Unkelbach, Christian; Alvares, Gail A; Hickie, Ian B
2011-02-01
Arginine Vasopressin modulates complex social and sexual behavior by enhancing social recognition, pair bonding, and aggression in non-human mammals. The influence of Arginine Vasopressin in human social and sexual behavior is, however, yet to be fully understood. We evaluated whether Arginine Vasopressin nasal spray facilitated recognition of positive and negative social and sexual stimuli over non-social stimuli. We used a recognition task that has already been shown to be sensitive to the influence of Oxytocin nasal spray (Unkelbach et al., 2008). In a double-blind, randomized, placebo-controlled, between-subjects design, 41 healthy male volunteers were administered Arginine Vasopressin (20 IU) or a placebo nasal spray after a 45 min wait period and then completed the recognition task. Results showed that the participants administered Arginine Vasopressin nasal spray were faster to detect sexual words over other types of words. This effect appeared for both positively and negatively valenced words. Results demonstrate for the first time that Arginine Vasopressin selectively enhances human cognition for sexual stimuli, regardless of valence. They further extend animal and human genetic studies linking Arginine Vasopressin to sexual behavior in males. Findings suggest an important cognitive mechanism that could enhance sexual behaviors in humans. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
Test battery for measuring the perception and recognition of facial expressions of emotion
Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner
2014-01-01
Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528
The processing of auditory and visual recognition of self-stimuli.
Hughes, Susan M; Nicholson, Shevon E
2010-12-01
This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.
Chen, Y C; Huang, F D; Chen, N H; Shou, J Y; Wu, L
1998-04-01
In the last 2-3 decades the role of the premotor cortex (PM) of the monkey in memorized spatial sequential (MSS) movements has been amply investigated. However, it is not yet known whether PM participates in movement sequence behaviour guided by the recognition of visual figures (i.e. the figure-recognition sequence, FRS). In the present work three monkeys were trained to perform both FRS and MSS tasks. Postmortem examination showed that 202 cells were in the dorso-lateral premotor cortex. Among 111 cells recorded during the two tasks, more than 50% changed their activity during the cue periods in either task. During the response period, the proportions of cells with changes of firing rate in FRS and MSS were high and roughly equal, while during the image period, the proportion in the FRS (83.7%) was significantly higher than that in the MSS (66.7%). Comparison of neuronal activities during the same motor sequence in the two different tasks showed that during the image periods PM neuronal activities were more closely related to the FRS task, while during the cue periods no difference could be found. Analysis of cell responses showed that neurons with longer latencies were much more numerous in MSS than in FRS in both the cue and the image periods. The present results indicate that the premotor cortex participates in FRS motor sequences as well as in MSS, and suggest that the dorso-lateral PM represents another functional subarea shared by both FRS and MSS tasks. However, in view of the differences in PM neuronal responses during the cue and image periods of the FRS and MSS tasks, it seems likely that the neural networks involved in the two tasks are different.
Unvoiced Speech Recognition Using Tissue-Conductive Acoustic Sensor
NASA Astrophysics Data System (ADS)
Heracleous, Panikos; Kaino, Tomomi; Saruwatari, Hiroshi; Shikano, Kiyohiro
2006-12-01
We present the use of stethoscope and silicon NAM (nonaudible murmur) microphones in automatic speech recognition. NAM microphones are special acoustic sensors, which are attached behind the talker's ear and can capture not only normal (audible) speech, but also very quietly uttered speech (nonaudible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise and they might be used in special systems (speech recognition, speech transform, etc.) for sound-impaired people. Using adaptation techniques and a small amount of training data, we achieved, for a 20 k dictation task, a word accuracy (the figure appears as an equation not reproduced in this record) for nonaudible murmur recognition in a clean environment. In this paper, we also investigate nonaudible murmur recognition in noisy environments and the effect of the Lombard reflex on nonaudible murmur recognition. We also propose three methods to integrate audible speech and nonaudible murmur recognition using a stethoscope NAM microphone with very promising results.
Cox, Gregory E; Hemmer, Pernille; Aue, William R; Criss, Amy H
2018-04-01
The development of memory theory has been constrained by a focus on isolated tasks rather than the processes and information that are common to situations in which memory is engaged. We present results from a study in which 453 participants took part in five different memory tasks: single-item recognition, associative recognition, cued recall, free recall, and lexical decision. Using hierarchical Bayesian techniques, we jointly analyzed the correlations between tasks within individuals (reflecting the degree to which tasks rely on shared cognitive processes) and within items (reflecting the degree to which tasks rely on the same information conveyed by the item). Among other things, we find that (a) the processes involved in lexical access and episodic memory are largely separate and rely on different kinds of information, (b) access to lexical memory is driven primarily by perceptual aspects of a word, (c) all episodic memory tasks rely to an extent on a set of shared processes which make use of semantic features to encode both single words and associations between words, and (d) recall involves additional processes likely related to contextual cuing and response production. These results provide a large-scale picture of memory across different tasks which can serve to drive the development of comprehensive theories of memory. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Lilienthal, Lindsey; Rose, Nathan S.; Tamez, Elaine; Myerson, Joel; Hale, Sandra
2014-01-01
Although individuals with high and low working memory (WM) span appear to differ in the extent to which irrelevant information interferes with their performance on WM tasks, the locus of this interference is not clear. The present study investigated whether, when performing a WM task, high- and low-span individuals differ in the activation of formerly relevant, but now irrelevant items, and/or in their ability to correctly identify such irrelevant items. This was done in two experiments, both of which used modified complex WM span tasks. In Experiment 1, the span task included an embedded lexical decision task designed to obtain an implicit measure of the activation of both currently and formerly relevant items. In Experiment 2, the span task included an embedded recognition judgment task designed to obtain an explicit measure of both item and source recognition ability. The results of these experiments indicate that low-span individuals do not hold irrelevant information in a more active state in memory than high-span individuals, but rather that low-span individuals are significantly poorer at identifying such information as irrelevant at the time of retrieval. These results suggest that differences in the ability to monitor the source of information, rather than differences in the activation of irrelevant information, are the more important determinant of performance on WM tasks. PMID:25921723
The Onset and Time Course of Semantic Priming during Rapid Recognition of Visual Words
Hoedemaker, Renske S.; Gordon, Peter C.
2016-01-01
In two experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (Ocular Lexical Decision Task), participants performed a lexical decision task using eye-movement responses on a sequence of four words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a meta-linguistic judgment. For both tasks, survival analyses showed that the earliest-observable effect (Divergence Point or DP) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective rather than a prospective priming mechanism and are consistent with compound-cue models of semantic priming. PMID:28230394
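The ex-Gaussian distribution fitted to the reading-time data above has a standard closed-form density: a Gaussian convolved with an exponential. The sketch below uses the conventional mu/sigma/tau parameterization; the parameter values are illustrative and not taken from the paper.

```python
import math

def exgauss_pdf(x, mu, sigma, tau):
    """Ex-Gaussian density: a Normal(mu, sigma) component convolved with an
    Exponential(mean tau) component, a common model for RT distributions."""
    z = (mu + sigma**2 / tau - x) / (sigma * math.sqrt(2.0))
    return (1.0 / (2.0 * tau)) * math.exp(
        sigma**2 / (2.0 * tau**2) + (mu - x) / tau) * math.erfc(z)

# Numerically check it behaves like a density over a wide grid.
mu, sigma, tau = 0.40, 0.05, 0.10   # seconds; illustrative RT parameters
dx = 0.001
mass = sum(exgauss_pdf(i * dx, mu, sigma, tau) for i in range(0, 2000)) * dx
```

The exponential component (tau) stretches the right tail, which is why increases in the priming effect with longer response times show up as changes in the tail of the fitted distribution.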
How does creating a concept map affect item-specific encoding?
Grimaldi, Phillip J; Poston, Laurel; Karpicke, Jeffrey D
2015-07-01
Concept mapping has become a popular learning tool. However, the processes underlying the task are poorly understood. In the present study, we examined the effect of creating a concept map on the processing of item-specific information. In 2 experiments, subjects learned categorized or ad hoc word lists by making pleasantness ratings, sorting words into categories, or creating a concept map. Memory was tested using a free recall test and a recognition memory test, which is considered to be especially sensitive to item-specific processing. Typically, tasks that promote item-specific processing enhance free recall of categorized lists, relative to category sorting. Concept mapping resulted in lower recall performance than both the pleasantness rating and category sorting condition for categorized words. Moreover, concept mapping resulted in lower recognition memory performance than the other 2 tasks. These results converge on the conclusion that creating a concept map disrupts the processing of item-specific information. (c) 2015 APA, all rights reserved.
A top-down manner-based DCNN architecture for semantic image segmentation.
Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin
2017-01-01
Given their powerful feature representations for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of visual mechanisms, we conclude that purely bottom-up DCNNs are not enough, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentations are well improved. We also quantitatively obtain an improvement of about 2%-3% in intersection over union (IOU) accuracy on the PASCAL VOC 2011 and 2012 test sets.
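The record does not detail how superpixels are injected into the architecture, but the general top-down idea of refining a per-pixel DCNN label map with superpixels can be sketched with a simple majority vote within each superpixel. The voting rule here is an assumption for illustration, not the authors' method.

```python
from collections import Counter

def refine_with_superpixels(labels, superpixels):
    """Refine a per-pixel label map by assigning every pixel the majority
    label of its superpixel; this sharpens coarse DCNN boundaries because
    superpixels adhere to image edges."""
    votes = {}
    for row_l, row_s in zip(labels, superpixels):
        for lab, sp in zip(row_l, row_s):
            votes.setdefault(sp, Counter())[lab] += 1
    majority = {sp: c.most_common(1)[0][0] for sp, c in votes.items()}
    return [[majority[sp] for sp in row] for row in superpixels]

# 4x4 toy label map with one stray pixel (label 1) inside superpixel 0.
labels      = [[0, 0, 1, 1],
               [0, 1, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 1, 1]]
superpixels = [[0, 0, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 1, 1]]
refined = refine_with_superpixels(labels, superpixels)
```

The stray pixel inside superpixel 0 is outvoted and corrected, illustrating how edge-aligned superpixels can clean up a coarse bottom-up prediction.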
Integrated Low-Rank-Based Discriminative Feature Learning for Recognition.
Zhou, Pan; Lin, Zhouchen; Zhang, Chao
2016-05-01
Feature learning plays a central role in pattern recognition. In recent years, many representation-based feature learning methods have been proposed and have achieved great success in many applications. However, these methods perform feature learning and subsequent classification in two separate steps, which may not be optimal for recognition tasks. In this paper, we present a supervised low-rank-based approach for learning discriminative features. By integrating latent low-rank representation (LatLRR) with a ridge regression-based classifier, our approach combines feature learning with classification, so that the regulated classification error is minimized. In this way, the extracted features are more discriminative for the recognition tasks. Our approach benefits from a recent discovery on the closed-form solutions to noiseless LatLRR. When there is noise, a robust Principal Component Analysis (PCA)-based denoising step can be added as preprocessing. When the scale of a problem is large, we utilize a fast randomized algorithm to speed up the computation of robust PCA. Extensive experimental results demonstrate the effectiveness and robustness of our method.
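The ridge regression component of the integrated objective has the closed-form solution w = (XᵀX + λI)⁻¹Xᵀy. The toy sketch below shows only that classifier component for two features, with the 2x2 inverse written out by hand; it is not the authors' full LatLRR formulation, and the data are hypothetical.

```python
def ridge_2d(X, y, lam):
    """Closed-form ridge regression w = (X^T X + lam*I)^-1 X^T y for two
    features; the 2x2 matrix inverse is expanded explicitly."""
    a = sum(x[0] * x[0] for x in X) + lam
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + lam
    g0 = sum(x[0] * t for x, t in zip(X, y))
    g1 = sum(x[1] * t for x, t in zip(X, y))
    det = a * d - b * b
    return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)

# Toy two-class problem: label +1 when the first feature is large.
X = [(1.0, 0.1), (0.9, 0.0), (-1.0, 0.2), (-1.1, 0.1)]
y = [1.0, 1.0, -1.0, -1.0]
w = ridge_2d(X, y, lam=0.1)
pred = 1.0 if w[0] * 1.2 + w[1] * 0.0 > 0 else -1.0
```

Because the classifier has this closed form, its regression error can be folded directly into the feature-learning objective, which is the coupling the abstract describes.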
Mechanisms and neural basis of object and pattern recognition: a study with chess experts.
Bilalić, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang
2010-11-01
Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and novices performing chess-related and -unrelated (visual) search tasks. As expected, the superiority of experts was limited to the chess-specific task, as there were no differences in a control task that used the same chess stimuli but did not require chess-specific recognition. The analysis of eye movements showed that experts immediately and exclusively focused on the relevant aspects in the chess task, whereas novices also examined irrelevant aspects. With random chess positions, when pattern knowledge could not be used to guide perception, experts nevertheless maintained an advantage. Experts' superior domain-specific parafoveal vision, a consequence of their knowledge about individual domain-specific symbols, enabled improved object recognition. Functional magnetic resonance imaging corroborated this differentiation between object and pattern recognition and showed that chess-specific object recognition was accompanied by bilateral activation of the occipitotemporal junction, whereas chess-specific pattern recognition was related to bilateral activations in the middle part of the collateral sulci. Using the expertise approach together with carefully chosen controls and multiple dependent measures, we identified object and pattern recognition as two essential cognitive processes in expert visual cognition, which may also help to explain the mechanisms of everyday perception.
Hawley, Wayne R; Grissom, Elin M; Moody, Nicole M; Dohanich, Gary P; Vasudevan, Nandini
2014-04-01
In ovariectomized rats, administration of estradiol, or of selective estrogen receptor agonists that activate either the α or β isoform, has been shown to enhance spatial cognition on a variety of learning and memory tasks, including those that capitalize on the preference of rats to seek out novelty. Although the effects of the putative estrogen G-protein-coupled receptor 30 (GPR30) on hippocampus-based tasks have been reported using food-motivated tasks, the effects of activation of GPR30 receptors on tasks that depend on the preference of rats to seek out spatial novelty remain to be determined. Therefore, the aim of the current study was to determine if short-term treatment of ovariectomized rats with G-1, an agonist for GPR30, would mimic the effects on spatial recognition memory observed following short-term estradiol treatment. In Experiment 1, ovariectomized rats treated with a low dose (1 μg) of estradiol 48 h and 24 h prior to the information trial of a Y-maze task exhibited a preference for the arm associated with the novel environment on the retention trial conducted 48 h later. In Experiment 2, treatment of ovariectomized rats with G-1 (25 μg) 48 h and 24 h prior to the information trial of a Y-maze task resulted in a greater preference for the arm associated with the novel environment on the retention trial. Collectively, the results indicated that short-term treatment of ovariectomized rats with a GPR30 agonist was sufficient to enhance spatial recognition memory, an effect that also occurred following short-term treatment with a low dose of estradiol. Copyright © 2014 Elsevier B.V. All rights reserved.
Gardiner, John M; Brandt, Karen R; Vargha-Khadem, Faraneh; Baddeley, Alan; Mishkin, Mortimer
2006-09-01
We report the performance in four recognition memory experiments of Jon, a young adult with early-onset developmental amnesia whose episodic memory is gravely impaired in tests of recall, but seems relatively preserved in tests of recognition, and who has developed normal levels of performance in tests of intelligence and general knowledge. Jon's recognition performance was enhanced by deeper levels of processing in comparing a more meaningful study task with a less meaningful one, but not by task enactment in comparing performance of an action with reading an action phrase. Both of these variables normally enhance episodic remembering, which Jon claimed to experience. But Jon was unable to support that claim by recollecting what it was that he remembered. Taken altogether, the findings strongly imply that Jon's recognition performance entailed little genuine episodic remembering and that the levels-of-processing effects in Jon reflected semantic, not episodic, memory.
Gasperini, Filippo; Brizzolara, Daniela; Cristofani, Paola; Casalini, Claudia; Chilosi, Anna Maria
2014-01-01
Children with Developmental Dyslexia (DD) are impaired in Rapid Automatized Naming (RAN) tasks, where subjects are asked to name arrays of high frequency items as quickly as possible. However, the reasons why RAN speed discriminates DD from typical readers are not yet fully understood. Our study aimed to identify some of the cognitive mechanisms underlying the RAN-reading relationship by comparing one group of 32 children with DD with an age-matched control group of typical readers on a naming and a visual recognition task, both using a discrete-trial methodology, in addition to a serial RAN task, all using the same stimuli (digits and colors). Results showed a significant slowness of DD children in both serial and discrete-trial naming (DN) tasks regardless of type of stimulus, but no difference between the two groups on the discrete-trial recognition task. Significant differences between DD and control participants in the RAN task disappeared when performance in the DN task was partialled out by covariance analysis for colors, but not for digits. The same pattern held in a subgroup of DD subjects with a history of early language delay (LD). By contrast, in a subsample of DD children without LD the RAN deficit was specific for digits and disappeared after slowness in DN was partialled out. Slowness in DN was more evident for LD than for noLD DD children. Overall, our results confirm previous evidence indicating a name-retrieval deficit as a cognitive impairment underlying RAN slowness in DD children. This deficit seems to be more marked in DD children with previous LD. Moreover, additional cognitive deficits specifically associated with serial RAN tasks have to be taken into account when explaining the deficient RAN speed of these latter children. We suggest that partially different cognitive dysfunctions underpin superficially similar RAN impairments in different subgroups of DD subjects. PMID:25237301
Pergolizzi, Denise; Chua, Elizabeth F
2016-10-01
Neuroimaging data have shown that activity in the lateral posterior parietal cortex (PPC) correlates with item recognition and source recollection, but there is considerable debate about its specific contributions. Performance on both item and source memory tasks was compared between participants given bilateral transcranial direct current stimulation (tDCS) over the parietal cortex and those given prefrontal or sham tDCS. The parietal tDCS group, but not the prefrontal group, showed decreased false recognition and less bias in item and source discrimination tasks compared to sham stimulation. These results are consistent with a causal role of the PPC in item and source memory retrieval, likely based on attentional and decision-making biases. Copyright © 2016 Elsevier Inc. All rights reserved.
Predicting reasoning from memory.
Heit, Evan; Hayes, Brett K
2011-02-01
In an effort to assess the relations between reasoning and memory, in 8 experiments, the authors examined how well responses on an inductive reasoning task are predicted from responses on a recognition memory task for the same picture stimuli. Across several experimental manipulations, such as varying study time, presentation frequency, and the presence of stimuli from other categories, there was a high correlation between reasoning and memory responses (average r = .87), and these manipulations showed similar effects on the 2 tasks. The results point to common mechanisms underlying inductive reasoning and recognition memory abilities. A mathematical model, GEN-EX (generalization from examples), derived from exemplar models of categorization, is presented, which predicts both reasoning and memory responses from pairwise similarities among the stimuli, allowing for additional influences of subtyping and deterministic responding. (c) 2010 APA, all rights reserved.
Clark, Steven E; Abbe, Allison; Larson, Rakel P
2006-11-01
S. E. Clark, A. Hori, A. Putnam, and T. J. Martin (2000) showed that collaboration on a recognition memory task produced facilitation in recognition of targets but had inconsistent and sometimes negative effects regarding distractors. They accounted for these results within the framework of a dual-process, recall-plus-familiarity model but offered only weak evidence to support it. The results of 3 experiments reported here provide stronger evidence for Clark et al.'s dual-process view and also show why such evidence is difficult to obtain. Copyright 2006 APA, all rights reserved.
Schultebraucks, Katharina; Deuter, Christian E; Duesenberg, Moritz; Schulze, Lars; Hellmann-Regen, Julian; Domke, Antonia; Lockenvitz, Lisa; Kuehl, Linn K; Otte, Christian; Wingenfeld, Katja
2016-09-01
Selective attention toward emotional cues and emotion recognition of facial expressions are important aspects of social cognition. Stress modulates social cognition through cortisol, which acts on glucocorticoid (GR) and mineralocorticoid receptors (MR) in the brain. We examined the role of MR activation on attentional bias toward emotional cues and on emotion recognition. We included 40 healthy young women and 40 healthy young men (mean age 23.9 ± 3.3), who received either 0.4 mg of the MR agonist fludrocortisone or placebo. A dot-probe paradigm was used to test for attentional biases toward emotional cues (happy and sad faces). Moreover, we used a facial emotion recognition task to investigate the ability to recognize emotional valence (anger and sadness) from facial expression in four graded categories of emotional intensity (20, 30, 40, and 80%). In the emotional dot-probe task, we found a main effect of treatment and a treatment × valence interaction. Post hoc analyses revealed an attentional bias away from sad faces after placebo intake and, after fludrocortisone intake, a shift in selective attention toward sad faces relative to placebo. We found no attentional bias toward happy faces after fludrocortisone or placebo intake. In the facial emotion recognition task, there was no main effect of treatment. MR stimulation seems to be important in modulating quick, automatic emotional processing, i.e., a shift in selective attention toward negative emotional cues. Our results confirm and extend previous findings of MR function. However, we did not find an effect of MR stimulation on emotion recognition.
Fast and Famous: Looking for the Fastest Speed at Which a Face Can be Recognized
Barragan-Jason, Gladys; Besson, Gabriel; Ceccaldi, Mathieu; Barbeau, Emmanuel J.
2012-01-01
Face recognition is supposed to be fast. However, the actual speed at which faces can be recognized remains unknown. To address this issue, we report two experiments run with speed constraints. In both experiments, famous faces had to be recognized among unknown ones using a large set of stimuli to prevent pre-activation of features which would speed up recognition. In the first experiment (31 participants), recognition of famous faces was investigated using a rapid go/no-go task. In the second experiment, 101 participants performed a highly time constrained recognition task using the Speed and Accuracy Boosting procedure. Results indicate that the fastest speed at which a face can be recognized is around 360–390 ms. Such latencies are about 100 ms longer than the latencies recorded in similar tasks in which subjects have to detect faces among other stimuli. We discuss which model of activation of the visual ventral stream could account for such latencies. These latencies are not consistent with a purely feed-forward pass of activity throughout the visual ventral stream. An alternative is that face recognition relies on the core network underlying face processing identified in fMRI studies (OFA, FFA, and pSTS) and reentrant loops to refine face representation. However, the model of activation favored is that of an activation of the whole visual ventral stream up to anterior areas, such as the perirhinal cortex, combined with parallel and feed-back processes. Further studies are needed to assess which of these three models of activation can best account for face recognition. PMID:23460051
Emotion effects on implicit and explicit musical memory in normal aging.
Narme, Pauline; Peretz, Isabelle; Strub, Marie-Laure; Ergis, Anne-Marie
2016-12-01
Normal aging affects explicit memory while leaving implicit memory relatively spared. Normal aging also modifies how emotions are processed and experienced, with increasing evidence that older adults (OAs) focus more on positive information than younger adults (YAs). The aim of the present study was to investigate how age-related changes in emotion processing influence explicit and implicit memory. We used emotional melodies that differed in terms of valence (positive or negative) and arousal (high or low). Implicit memory was assessed with a preference task exploiting exposure effects, and explicit memory with a recognition task. Results indicated that effects of valence and arousal interacted to modulate both implicit and explicit memory in YAs. In OAs, recognition was poorer than in YAs; however, recognition of positive and high-arousal (happy) studied melodies was comparable. Insofar as socioemotional selectivity theory (SST) predicts a preservation of the recognition of positive information, our findings are not fully consistent with the extension of this theory to positive melodies since recognition of low-arousal (peaceful) studied melodies was poorer in OAs. In the preference task, YAs showed stronger exposure effects than OAs, suggesting an age-related decline of implicit memory. This impairment is smaller than the one observed for explicit memory (recognition), extending to the musical domain the dissociation between explicit memory decline and implicit memory relative preservation in aging. Finally, the disproportionate preference for positive material seen in OAs did not translate into stronger exposure effects for positive material suggesting no age-related emotional bias in implicit memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Ease of identifying words degraded by visual noise.
Barber, P; de la Mahotière, C
1982-08-01
A technique is described for investigating word recognition involving the superimposition of 'noise' on the visual target word. For this task a word is printed in the form of letters made up of separate elements; noise consists of additional elements which serve to reduce the ease with which the words can be recognized, and a threshold-like measure can be obtained in terms of the amount of noise. A word frequency effect was obtained for the noise task, and for words presented tachistoscopically but in conventional typography. For the tachistoscope task, however, the frequency effect depended on the method of presentation. A second study showed no effect of inspection interval on performance on the noise task. A word frequency effect was also found in a third experiment with tachistoscopic exposure of the noise task stimuli in undegraded form. The question of whether common processes are drawn on by tasks entailing different ways of varying ease of recognition is addressed, and the suitability of different tasks for word recognition research is discussed.
Bilingual Language Switching: Production vs. Recognition
Mosca, Michela; de Bot, Kees
2017-01-01
This study aims at assessing how bilinguals select words in the appropriate language in production and recognition while minimizing interference from the non-appropriate language. Two prominent models are considered which assume that when one language is in use, the other is suppressed. The Inhibitory Control (IC) model suggests that, in both production and recognition, the amount of inhibition on the non-target language is greater for the stronger compared to the weaker language. In contrast, the Bilingual Interactive Activation (BIA) model proposes that, in language recognition, the amount of inhibition on the weaker language is stronger than otherwise. To investigate whether bilingual language production and recognition can be accounted for by a single model of bilingual processing, we tested a group of native speakers of Dutch (L1), advanced speakers of English (L2) in a bilingual recognition and production task. Specifically, language switching costs were measured while participants performed a lexical decision (recognition) and a picture naming (production) task involving language switching. Results suggest that while in language recognition the amount of inhibition applied to the non-appropriate language increases along with its dominance as predicted by the IC model, in production the amount of inhibition applied to the non-relevant language is not related to language dominance, but rather it may be modulated by speakers' unconscious strategies to foster the weaker language. This difference indicates that bilingual language recognition and production might rely on different processing mechanisms and cannot be accounted within one of the existing models of bilingual language processing. PMID:28638361
Impairment of nonverbal recognition in Alzheimer disease: a PET O-15 study.
Anderson, K E; Brickman, A M; Flynn, J; Scarmeas, N; Van Heertum, R; Sackeim, H; Marder, K S; Bell, K; Moeller, J R; Stern, Y
2007-07-03
To characterize deficits in nonverbal recognition memory and functional brain changes associated with these deficits in Alzheimer disease (AD). Using O-15 PET, we studied 11 patients with AD and 17 cognitively intact elders during the combined encoding and retrieval periods of a nonverbal recognition task. Both task conditions involved recognition of line drawings of abstract shapes. In both conditions, subjects were first presented a list of shapes as study items, and then a list as test items, containing items from the study list and foils. In the titrated demand condition, the shape study list size (SLS) was adjusted prior to imaging so that each subject performed at approximately 75% recognition accuracy; difficulty during PET scanning in this condition was approximately matched across subjects. A control task was used in which SLS = 1 shape. During performance of the titrated demand condition, SLS averaged 4.55 (+/-1.86) shapes for patients with AD and 7.53 (+/-4.81) for healthy elderly subjects (p = 0.031). However, both groups of subjects were closely matched on performance in the titrated demand condition during PET scanning, with 72.17% (+/-7.98%) correct for patients with AD and 72.25% (+/-7.03%) for elders (p = 0.979). PET results demonstrated that patients with AD showed greater mean differences between the titrated demand condition and control in areas including the left fusiform and inferior frontal regions (Brodmann areas 19 and 45). Relative fusiform and inferior frontal differences may reflect the AD patients' compensatory engagement of alternate brain regions. The strategy used by patients with AD is likely to be a general mechanism of compensation, rather than task-specific.
Choudhury, Naseem; Leppanen, Paavo H.T.; Leevers, Hilary J.; Benasich, April A.
2007-01-01
An infant’s ability to process auditory signals presented in rapid succession (i.e. rapid auditory processing abilities [RAP]) has been shown to predict differences in language outcomes in toddlers and preschool children. Early deficits in RAP abilities may serve as a behavioral marker for language-based learning disabilities. The purpose of this study is to determine if performance on infant information processing measures designed to tap RAP and global processing skills differs as a function of family history of specific language impairment (SLI) and/or the particular demand characteristics of the paradigm used. Seventeen 6- to 9-month-old infants from families with a history of specific language impairment (FH+) and 29 control infants (FH−) participated in this study. Infants’ performance on two different RAP paradigms (head-turn procedure [HT] and auditory-visual habituation/recognition memory [AVH/RM]) and on a global processing task (visual habituation/recognition memory [VH/RM]) was assessed at 6 and 9 months. Toddler language and cognitive skills were evaluated at 12 and 16 months. A number of significant group differences were seen: FH+ infants showed significantly poorer discrimination of fast rate stimuli on both RAP tasks, took longer to habituate on both habituation/recognition memory measures, and had lower novelty preference scores on the visual habituation/recognition memory task. Infants’ performance on the two RAP measures provided independent but converging contributions to outcome. Thus, different mechanisms appear to underlie performance on operantly conditioned tasks as compared to habituation/recognition memory paradigms. Further, infant RAP abilities predicted 12- and 16-month language scores above and beyond family history of SLI. The results of this study provide additional support for the validity of infant RAP abilities as a behavioral marker for later language outcome.
Finally, this is the first study to use a battery of infant tasks to demonstrate multi-modal processing deficits in infants at risk for SLI. PMID:17286846
Holding, Benjamin C; Laukka, Petri; Fischer, Håkan; Bänziger, Tanja; Axelsson, John; Sundelin, Tina
2017-11-01
Insufficient sleep has been associated with impaired recognition of facial emotions. However, previous studies have found inconsistent results, potentially stemming from the type of static picture task used. We therefore examined whether insufficient sleep was associated with decreased emotion recognition ability in two separate studies using a dynamic multimodal task. Study 1 used a cross-sectional design consisting of 291 participants with questionnaire measures assessing sleep duration and self-reported sleep quality for the previous night. Study 2 used an experimental design involving 181 participants where individuals were quasi-randomized into either a sleep-deprivation (N = 90) or a sleep-control (N = 91) condition. All participants from both studies were tested on the same forced-choice multimodal test of emotion recognition to assess the accuracy of emotion categorization. Sleep duration, self-reported sleep quality (study 1), and sleep deprivation (study 2) did not predict overall emotion recognition accuracy or speed. Similarly, the responses to each of the twelve emotions tested showed no evidence of impaired recognition ability, apart from one positive association suggesting that greater self-reported sleep quality could predict more accurate recognition of disgust (study 1). The studies presented here involve considerably larger samples than previous studies and the results support the null hypotheses. Therefore, we suggest that the ability to accurately categorize the emotions of others is not associated with short-term sleep duration or sleep quality and is resilient to acute periods of insufficient sleep. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
Han, Ren-Wen; Zhang, Rui-San; Xu, Hong-Jiao; Chang, Min; Peng, Ya-Li; Wang, Rui
2013-07-01
Neuropeptide S (NPS), the endogenous ligand of NPSR, has been shown to promote arousal and anxiolytic-like effects. Consistent with the predominant distribution of NPSR in brain tissues associated with learning and memory, NPS has been reported to modulate cognitive function in rodents. Here, we investigated the role of NPS in memory formation, and determined whether NPS could mitigate memory impairment induced by the selective N-methyl-D-aspartate receptor antagonist MK801, the muscarinic cholinergic receptor antagonist scopolamine, or Aβ₁₋₄₂ in mice, using novel object and object location recognition tasks. Intracerebroventricular (i.c.v.) injection of 1 nmol NPS 5 min after training not only facilitated object recognition memory formation, but also prolonged memory retention in both tasks. The improvement of object recognition memory induced by NPS could be blocked by the selective NPSR antagonist SHA 68, indicating pharmacological specificity. We then found that i.c.v. injection of NPS reversed memory disruption induced by MK801, scopolamine or Aβ₁₋₄₂ in both tasks. In summary, our results indicate that NPS facilitates memory formation and prolongs the retention of memory through activation of the NPSR, and mitigates amnesia induced by blockade of the glutamatergic or cholinergic system or by Aβ₁₋₄₂, suggesting that the NPS/NPSR system may be a new target for enhancing memory and treating amnesia. Copyright © 2013 Elsevier Ltd. All rights reserved.
Sentence Verification, Sentence Recognition, and the Semantic-Episodic Distinction
ERIC Educational Resources Information Center
Shoben, Edward J.; And Others
1978-01-01
In an attempt to assess the validity of the distinction between episodic and semantic memory, this research examined the influence of two variables on sentence verification (presumably a semantic memory task) and sentence recognition (presumably an episodic memory task). (Editor)
Early prediction of student goals and affect in narrative-centered learning environments
NASA Astrophysics Data System (ADS)
Lee, Sunyoung
Recent years have seen a growing recognition of the role of goal and affect recognition in intelligent tutoring systems. Goal recognition is the task of inferring users' goals from a sequence of observations of their actions. Because of the uncertainty inherent in every facet of human computer interaction, goal recognition is challenging, particularly in contexts in which users can perform many actions in any order, as is the case with intelligent tutoring systems. Affect recognition is the task of identifying the emotional state of a user from a variety of physical cues, which are produced in response to affective changes in the individual. Accurately recognizing student goals and affect states could contribute to more effective and motivating interactions in intelligent tutoring systems. By exploiting knowledge of student goals and affect states, intelligent tutoring systems can dynamically modify their behavior to better support individual students. To create effective interactions in intelligent tutoring systems, goal and affect recognition models should satisfy two key requirements. First, because incorrectly predicted goals and affect states could significantly diminish the effectiveness of interactive systems, goal and affect recognition models should provide accurate predictions of user goals and affect states. When observations of users' activities become available, recognizers should make accurate "early" predictions. Second, goal and affect recognition models should be highly efficient so they can operate in real time. To address these issues, we present an inductive approach to recognizing student goals and affect states in intelligent tutoring systems by learning goal and affect recognition models. Our work focuses on goal and affect recognition in an important new class of intelligent tutoring systems, narrative-centered learning environments.
We report the results of empirical studies of induced recognition models from observations of students' interactions in narrative-centered learning environments. Experimental results suggest that induced models can make accurate early predictions of student goals and affect states, and they are sufficiently efficient to meet the real-time performance requirements of interactive learning environments.
Recognition of oral spelling is diagnostic of the central reading processes.
Schubert, Teresa; McCloskey, Michael
2015-01-01
The task of recognition of oral spelling (stimulus: "C-A-T", response: "cat") is often administered to individuals with acquired written language disorders, yet there is no consensus about the underlying cognitive processes. We adjudicate between two existing hypotheses: Recognition of oral spelling uses central reading processes, or recognition of oral spelling uses central spelling processes in reverse. We tested the recognition of oral spelling and spelling to dictation abilities of a single individual with acquired dyslexia and dysgraphia. She was impaired relative to matched controls in spelling to dictation but unimpaired in recognition of oral spelling. Recognition of oral spelling for exception words (e.g., colonel) and pronounceable nonwords (e.g., larth) was intact. Our results were predicted by the hypothesis that recognition of oral spelling involves the central reading processes. We conclude that recognition of oral spelling is a useful tool for probing the integrity of the central reading processes.
Schall, Sonja; von Kriegstein, Katharina
2014-01-01
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
Repetition priming of face recognition in a serial choice reaction-time task.
Roberts, T; Bruce, V
1989-05-01
Marshall & Walker (1987) found that pictorial stimuli yield visual priming that is disrupted by an unpredictable visual event in the response-stimulus interval. They argue that visual stimuli are represented in memory in the form of distinct visual and object codes. Bruce & Young (1986) propose similar pictorial, structural and semantic codes which mediate the recognition of faces, yet repetition priming results obtained with faces as stimuli (Bruce & Valentine, 1985), and with objects (Warren & Morton, 1982), are quite different from those of Marshall & Walker (1987), in the sense that recognition is facilitated by pictures presented 20 minutes earlier. The experiment reported here used different views of familiar and unfamiliar faces as stimuli in a serial choice reaction-time task and found that, with identical pictures, repetition priming survives an intervening item requiring a response, with both familiar and unfamiliar faces. Furthermore, with familiar faces such priming was present even when the view of the prime was different from the target. The theoretical implications of these results are discussed.
Selective involvement of superior frontal cortex during working memory for shapes.
Yee, Lydia T S; Roe, Katherine; Courtney, Susan M
2010-01-01
A spatial/nonspatial functional dissociation between the dorsal and ventral visual pathways is well established and has formed the basis of domain-specific theories of prefrontal cortex (PFC). Inconsistencies in the literature regarding prefrontal organization, however, have led to questions regarding whether the nature of the dissociations observed in PFC during working memory is equivalent to that observed in the visual pathways for perception. In particular, the dissociation between dorsal and ventral PFC during working memory for locations versus object identities has been clearly present in some studies but not in others, seemingly in part due to the type of objects used. The current study compared functional MRI activation during delayed-recognition tasks for shape or color, two object features considered to be processed by the ventral pathway for perceptual recognition. Activation for the shape delayed-recognition task was greater than that for the color task in the lateral occipital cortex, in agreement with studies of visual perception. Greater memory-delay activity was also observed, however, in the parietal and superior frontal cortices for the shape than for the color task. Activity in superior frontal cortex was associated with better performance on the shape task. Conversely, greater delay activity for color than for shape was observed in the left anterior insula, and this activity was associated with better performance on the color task. These results suggest that superior frontal cortex contributes to performance on tasks requiring working memory for object identities, but it represents different information about those objects than does the ventral frontal cortex.
Practice makes imperfect: Working memory training can harm recognition memory performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzen, Laura E.; Trumbo, Michael C.; Haass, Michael J.
There is a great deal of debate concerning the benefits of working memory (WM) training and whether that training can transfer to other tasks. Although a consistent finding is that WM training programs elicit a short-term near-transfer effect (i.e., improvement in WM skills), results are inconsistent when considering persistence of such improvement and far-transfer effects. In this study, we compared three groups of participants: a group that received WM training, a group that received training on how to use a mental imagery memory strategy, and a control group that received no training. Although the WM training group improved on the trained task, their posttraining performance on nontrained WM tasks did not differ from that of the other two groups. In addition, although the imagery training group’s performance on a recognition memory task increased after training, the WM training group’s performance on the task decreased after training. Participants’ descriptions of the strategies they used to remember the studied items indicated that WM training may lead people to adopt memory strategies that are less effective for other types of memory tasks. Our results indicate that WM training may have unintended consequences for other types of memory performance.
Tryptophan depletion decreases the recognition of fear in female volunteers.
Harmer, C J; Rogers, R D; Tunbridge, E; Cowen, P J; Goodwin, G M
2003-06-01
Serotonergic processes have been implicated in the modulation of fear conditioning in humans, postulated to occur at the level of the amygdala. The processing of other fear-relevant cues, such as facial expressions, has also been associated with amygdala function, but an effect of serotonin depletion on these processes has not been assessed. The present study investigated the effects of reducing serotonin function, using acute tryptophan depletion, on the recognition of basic facial expressions of emotions in healthy male and female volunteers. A double-blind between-groups design was used, with volunteers being randomly allocated to receive an amino acid drink specifically lacking tryptophan or a control mixture containing a balanced mixture of these amino acids. Participants were given a facial expression recognition task 5 h after drink administration. This task featured examples of six basic emotions (fear, anger, disgust, surprise, sadness and happiness) that had been morphed between each full emotion and neutral in 10% steps. As a control, volunteers were given a famous face classification task matched in terms of response selection and difficulty level. Tryptophan depletion significantly impaired the recognition of fearful facial expressions in female, but not male, volunteers. This was specific since recognition of other basic emotions was comparable in the two groups. There was also no effect of tryptophan depletion on the classification of famous faces or on subjective state ratings of mood or anxiety. These results confirm a role for serotonin in the processing of fear related cues, and in line with previous findings also suggest greater effects of tryptophan depletion in female volunteers. Although acute tryptophan depletion does not typically affect mood in healthy subjects, the present results suggest that subtle changes in the processing of emotional material may occur with this manipulation of serotonin function.
Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition
Murphy, Jillian M.; Ridley, Nicole J.; Vercammen, Ans
2015-01-01
The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, tDCS had no effect on responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction. PMID:25971602
Word Recognition Processing Efficiency as a Component of Second Language Listening
ERIC Educational Resources Information Center
Joyce, Paul
2013-01-01
This study investigated the application of the speeded lexical decision task to L2 aural processing efficiency. One-hundred and twenty Japanese university students completed an aural word/nonword task. When the variation of lexical decision time (CV) was correlated with reaction time (RT), the results suggested that the single-word recognition…
Escalating dose, multiple binge methamphetamine regimen does not impair recognition memory in rats.
Clark, Robert E; Kuczenski, Ronald; Segal, David S
2007-07-01
Rats exposed to methamphetamine (METH) in an acute high-dose "binge" pattern have been reported to exhibit a persistent deficit in a novel object recognition (NOR) task, which may suggest a potential risk for human METH abusers. However, most high-dose METH abusers initially use lower doses before progressively increasing the dose, only eventually engaging in multiple daily administrations. To simulate this pattern of METH exposure, we administered progressively increasing doses of METH to rats over a 14-day interval, then treated them with daily METH binges for 11 days. This treatment resulted in a persistent deficit in striatal dopamine (DA) levels of approximately 20%. We then tested them in a NOR task under a variety of conditions. We could not detect a deficit in their performance in the NOR task under any of the testing conditions. These results suggest that mechanisms other than, or additional to, the decrement in striatal DA associated with an acute METH binge are responsible for the deficit in the NOR task, and that neuroadaptations consequent to prolonged escalating-dose METH pretreatment militate against these mechanisms.
Ehrlé, Nathalie; Henry, Audrey; Pesa, Audrey; Bakchine, Serge
2011-03-01
This paper presents a French battery designed to assess emotional and sociocognitive abilities in neurological patients in clinical practice. The first part of this battery includes subtests assessing emotions: a recognition task of primary facial emotions, a discrimination task of facial emotions, a task of expressive intensity judgment, a task of gender identification, and a recognition task of musical emotions. The second part aims to assess sociocognitive abilities, mainly theory of mind (attribution of mental states to others: false belief tasks of first and second order, faux-pas task) and social norms (moral/conventional distinction task, social situations task), but also abstract language and humour. We present a general description of the battery with special attention to specific methodological constraints for the assessment of neurological patients. After a brief introduction to moral and conventional judgments (definition and current theoretical basis), the French version of the social norm task from RJR Blair (Blair and Cipolotti, 2000) is developed. The relevance of these tasks in the frontal variant of frontotemporal dementia (fvFTD) is illustrated by the results of a study conducted in 18 patients by the Cambridge group and by our own study of a patient with early-stage fvFTD. The relevance of diagnosing sociocognitive impairment in neurological patients is discussed.
Huff, Mark J; Yates, Tyler J; Balota, David A
2018-05-03
Recently, we have shown that two types of initial testing (recall of a list or guessing of critical items repeated over 12 study/test cycles) improved final recognition of related and unrelated word lists relative to restudy. These benefits were eliminated, however, when test instructions were manipulated within subjects and presented after study of each list, procedures designed to minimise expectancy of a specific type of upcoming test [Huff, Balota, & Hutchison, 2016. The costs and benefits of testing and guessing on recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42, 1559-1572. doi: 10.1037/xlm0000269], suggesting that testing and guessing effects may be influenced by encoding strategies specific to the type of upcoming task. We follow up on these experiments by examining test-expectancy processes in guessing and testing. Testing and guessing benefits over restudy were not found when test instructions were presented either after (Experiment 1) or before (Experiment 2) a single study/task cycle was completed, nor were benefits found when instructions were presented before study/task cycles and the task was repeated three times (Experiment 3). Testing and guessing benefits emerged only when instructions were presented before a study/task cycle and the task was repeated six times (Experiments 4A and 4B). These experiments demonstrate that initial testing and guessing can produce memory benefits in recognition, but only following substantial task repetitions, which likely promote task-expectancy processes.
Entity recognition in the biomedical domain using a hybrid approach.
Basaldella, Marco; Furrer, Lenz; Tasso, Carlo; Rinaldi, Fabio
2017-11-09
This article describes a high-recall, high-precision approach for the extraction of biomedical entities from scientific articles. The approach uses a two-stage pipeline, combining a dictionary-based entity recognizer with a machine-learning classifier. First, the OGER entity recognizer, which has a bias towards high recall, annotates the terms that appear in selected domain ontologies. Subsequently, the Distiller framework uses this information as a feature for a machine learning algorithm to select the relevant entities only. For this step, we compare two different supervised machine-learning algorithms: Conditional Random Fields and Neural Networks. In an in-domain evaluation using the CRAFT corpus, we test the performance of the combined systems when recognizing chemicals, cell types, cellular components, biological processes, molecular functions, organisms, proteins, and biological sequences. Our best system combines dictionary-based candidate generation with Neural-Network-based filtering. It achieves an overall precision of 86% at a recall of 60% on the named entity recognition task, and a precision of 51% at a recall of 49% on the concept recognition task. These results are to our knowledge the best reported so far in this particular task.
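The two-stage pipeline described above, a high-recall dictionary matcher followed by a learned filter, can be sketched as follows. This is an illustrative sketch, not the OGER/Distiller code: the function names, the toy lexicon, and the stand-in scoring rule (in place of the Conditional Random Field or Neural Network classifier) are all assumptions.

```python
def dictionary_candidates(tokens, lexicon):
    """Stage 1: annotate every token span found in the lexicon (biased toward high recall)."""
    candidates = []
    for i in range(len(tokens)):
        for j in range(i + 1, len(tokens) + 1):
            span = " ".join(tokens[i:j])
            if span.lower() in lexicon:
                candidates.append((i, j, span, lexicon[span.lower()]))
    return candidates

def filter_candidates(candidates, score, threshold=0.5):
    """Stage 2: a trained classifier would score each candidate; here a
    plain scoring function stands in for the ML component."""
    return [c for c in candidates if score(c) >= threshold]

# Hypothetical mini-lexicon and sentence for demonstration.
lexicon = {"p53": "protein", "cell cycle": "biological_process"}
tokens = "p53 regulates the cell cycle".split()
cands = dictionary_candidates(tokens, lexicon)
# Toy score: longer spans are treated as more likely to be genuine mentions.
entities = filter_candidates(cands, score=lambda c: min(1.0, (c[1] - c[0]) / 2))
```

The division of labour mirrors the abstract's design: stage 1 over-generates so that recall is not lost, and precision is recovered in stage 2 by discarding candidates the classifier deems irrelevant.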
Changes in Visual Object Recognition Precede the Shape Bias in Early Noun Learning
Yee, Meagan; Jones, Susan S.; Smith, Linda B.
2012-01-01
Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large-sample cross-sectional study and a smaller-sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic-level categories from sparse structural representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research using artificial noun learning tasks shows that during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and that it is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning. PMID:23227015
Morosan, Larisa; Badoud, Deborah; Zaharia, Alexandra; Brosch, Tobias; Eliez, Stephan; Bateman, Anthony; Heller, Patrick; Debbané, Martin
2017-01-01
Background Previous research suggests that antisocial individuals present impairment in social cognitive processing, more specifically in emotion recognition (ER) and perspective taking (PT). The first aim of the present study was to investigate the recognition of a wide range of emotional expressions and visual PT capacities in a group of incarcerated male adolescents in comparison to a matched group of community adolescents. Secondly, we sought to explore the relationship between these two mechanisms in relation to psychopathic traits. Methods Forty-five male adolescents (22 incarcerated adolescents (Mage = 16.52, SD = 0.96) and 23 community adolescents (Mage = 16.43, SD = 1.41)) participated in the study. ER abilities were measured using a dynamic and multimodal task that requires the participants to watch short videos in which trained actors express 14 emotions. PT capacities were examined using a task recognized and proven to be sensitive to adolescent development, where participants had to follow the directions of another person whilst taking into consideration his perspective. Results We found a main effect of group on emotion recognition scores. In comparison to the community adolescents, the incarcerated adolescents presented lower recognition of three emotions: interest, anxiety and amusement. Analyses also revealed significant impairments in PT capacities in incarcerated adolescents. In addition, incarcerated adolescents’ PT scores were uniquely correlated to their scores on recognition of interest. Conclusions The results corroborate previously reported impairments in ER and PT capacities, in the incarcerated adolescents. The study also indicates an association between impairments in the recognition of interest and impairments in PT. PMID:28122048
Feedforward object-vision models only tolerate small image variations compared to human
Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi
2014-01-01
Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanisms have been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only under low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986
Repetition Suppression and Reactivation in Auditory–Verbal Short-Term Recognition Memory
Buchsbaum, Bradley R; D'Esposito, Mark
2009-01-01
The neural response to stimulus repetition is not uniform across brain regions, stimulus modalities, or task contexts. For instance, it has been observed in many functional magnetic resonance imaging (fMRI) studies that sometimes stimulus repetition leads to a relative reduction in neural activity (repetition suppression), whereas in other cases repetition results in a relative increase in activity (repetition enhancement). In the present study, we hypothesized that in the context of a verbal short-term recognition memory task, repetition-related “increases” should be observed in the same posterior temporal regions that have been previously associated with “persistent activity” in working memory rehearsal paradigms. We used fMRI and a continuous recognition memory paradigm with short lags to examine repetition effects in the posterior and anterior regions of the superior temporal cortex. Results showed that, consistent with our hypothesis, the 2 posterior temporal regions consistently associated with working memory maintenance, also show repetition increases during short-term recognition memory. In contrast, a region in the anterior superior temporal lobe showed repetition suppression effects, consistent with previous research work on perceptual adaptation in the auditory–verbal domain. We interpret these results in light of recent theories of the functional specialization along the anterior and posterior axes of the superior temporal lobe. PMID:18987393
ERIC Educational Resources Information Center
Janning, Ruth; Schatten, Carlotta; Schmidt-Thieme, Lars
2016-01-01
Recognising students' emotion, affect or cognition is a relatively young field and still a challenging task in the area of intelligent tutoring systems. There are several ways to use the output of these recognition tasks within the system. The approach most often mentioned in the literature is using it for giving feedback to the students. The…
Soravia, Leila M; Witmer, Joëlle S; Schwab, Simon; Nakataki, Masahito; Dierks, Thomas; Wiest, Roland; Henke, Katharina; Federspiel, Andrea; Jann, Kay
2016-03-01
Low self-referential thought is associated with better concentration, which leads to deeper encoding and increases learning and subsequent retrieval. There is evidence that being engaged in externally rather than internally focused tasks is related to low neural activity in the default mode network (DMN), promoting an open mind and the deep elaboration of new information. Thus, reduced DMN activity should lead to enhanced concentration, comprehensive stimulus evaluation including emotional categorization, deeper stimulus processing, and better long-term retention over one whole week. In this fMRI study, we investigated the effects of brain activation preceding and during incidental encoding of emotional pictures on subsequent recognition performance. During fMRI, 24 subjects were exposed to 80 pictures of different emotional valence and subsequently asked to complete an online recognition task one week later. Results indicate that neural activity within the medial temporal lobes during encoding predicts subsequent memory performance. Moreover, low activity of the default mode network preceding incidental encoding leads to slightly better recognition performance independent of the emotional perception of a picture. The findings indicate that the suppression of internally oriented thoughts leads to a more comprehensive and thorough evaluation of a stimulus and its emotional valence. Reduced activation of the DMN prior to stimulus onset is associated with deeper encoding and enhanced consolidation and retrieval performance even one week later. Even small prestimulus lapses of attention influence consolidation and subsequent recognition performance. © 2015 Wiley Periodicals, Inc.
Goal-seeking neural net for recall and recognition
NASA Astrophysics Data System (ADS)
Omidvar, Omid M.
1990-07-01
Neural networks have been used to mimic cognitive processes which take place in animal brains. The learning capability inherent in neural networks makes them suitable candidates for adaptive tasks such as recall and recognition. The synaptic reinforcements create a proper condition for adaptation, which results in memorization, formation of perception, and higher-order information processing activities. In this research a model of a goal-seeking neural network is studied and the operation of the network with regard to recall and recognition is analyzed. In these analyses, recall is defined as retrieval of stored information in which little or no matching is involved; recognition, on the other hand, is recall with matching, and therefore involves memorizing a piece of information with complete presentation. This research takes the generalized view of reinforcement, in which all signals are potential reinforcers. The neuronal response is considered to be the source of the reinforcement. This local approach to adaptation leads to the goal-seeking nature of the neurons as network components. In the proposed model all the synaptic strengths are reinforced in parallel, while the reinforcement among the layers is done in a distributed, pipelined fashion from the last layer inward. A model of a complex neuron with a varying threshold is developed to account for the inhibitory and excitatory behavior of real neurons. The resulting goal-seeking network is used to perform recall and recognition tasks, and its performance on these tasks is presented.
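The "local reinforcement" idea in the abstract above, each neuron treating its own response as the reinforcing signal, can be illustrated with a minimal sketch. The update rule, threshold, and learning-rate constants below are assumptions for demonstration, not the model from the report.

```python
def step(weights, inputs, threshold=0.5, rate=0.1):
    """One goal-seeking update: the neuron fires if its drive exceeds the
    threshold, then each active synapse is reinforced in proportion to the
    neuron's own response (the local reinforcement signal)."""
    drive = sum(w * x for w, x in zip(weights, inputs))
    response = 1.0 if drive > threshold else 0.0
    new_weights = [w + rate * response * x for w, x in zip(weights, inputs)]
    return new_weights, response

# Repeated presentation of the same pattern strengthens only the synapses
# that carried it, which is the sketch's analogue of memorization.
weights = [0.4, 0.3, 0.1]
pattern = [1, 1, 0]
for _ in range(3):
    weights, response = step(weights, pattern)
```

Because the reinforcement depends only on the neuron's local response, every synapse can be updated in parallel, which is the property the abstract emphasizes.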
Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu
2018-01-01
Current retinal prostheses can generate only low-resolution visual percepts, composed of a limited number of phosphenes elicited by an electrode array, with uncontrollable color and restricted grayscale. Under this visual perception, prosthetic recipients can complete some simple visual tasks, but more complex tasks like face identification/object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive-iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) The results, verified by psychophysical experiments, showed that under simulated prosthetic vision both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was affected positively by paired-interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density array. Copyright © 2017 Elsevier B.V. All rights reserved.
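The Direct Pixelization baseline mentioned above amounts to reducing an image to a coarse grid of phosphenes. A minimal sketch, assuming simple block averaging and an evenly divisible grid; the enhancement strategies from the abstract would additionally segment and emphasize the foreground object before this step.

```python
def pixelize(image, grid):
    """Average-pool a 2D grayscale image (list of rows) down to a
    grid x grid phosphene map, one gray level per phosphene."""
    h, w = len(image), len(image[0])
    bh, bw = h // grid, w // grid        # block size per phosphene
    out = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            block = [image[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

# 4x4 toy image with a bright bottom-right quadrant -> 2x2 phosphene map.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 255, 255],
       [0, 0, 255, 255]]
phosphenes = pixelize(img, 2)
```

Even this toy case shows why resolution is the bottleneck: a 4x4 scene collapses into four phosphenes, so any detail finer than a block is lost unless the foreground is enhanced first.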
Facial recognition deficits as a potential endophenotype in bipolar disorder.
Vierck, Esther; Porter, Richard J; Joyce, Peter R
2015-11-30
Bipolar disorder (BD) is considered a highly heritable and genetically complex disorder. Several cognitive functions, such as executive functions and verbal memory have been suggested as promising candidates for endophenotypes. Although there is evidence for deficits in facial emotion recognition in individuals with BD, studies investigating these functions as endophenotypes are rare. The current study investigates emotion recognition as a potential endophenotype in BD by comparing 36 BD participants, 24 of their 1st degree relatives and 40 healthy control participants in a computerised facial emotion recognition task. Group differences were evaluated using repeated measurement analysis of co-variance with age as a covariate. Results revealed slowed emotion recognition for both BD and their relatives. Furthermore, BD participants were less accurate than healthy controls in their recognition of emotion expressions. We found no evidence of emotion specific differences between groups. Our results provide evidence for facial recognition as a potential endophenotype in BD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Literature review of voice recognition and generation technology for Army helicopter applications
NASA Astrophysics Data System (ADS)
Christ, K. A.
1984-08-01
This report is a literature review on the topics of voice recognition and generation. Areas covered are: manual versus vocal data input, vocabulary, stress and workload, noise, protective masks, feedback, and voice warning systems. Results of the studies presented in this report indicate that voice data entry has less of an impact on a pilot's flight performance, during low-level flying and other difficult missions, than manual data entry. However, the stress resulting from such missions may cause the pilot's voice to change, reducing the recognition accuracy of the system. The noise present in helicopter cockpits also causes the recognition accuracy to decrease. Noise-cancelling devices are being developed and improved upon to increase the recognition performance in noisy environments. Future research in the fields of voice recognition and generation should be conducted in the areas of stress and workload, vocabulary, and the types of voice generation best suited for the helicopter cockpit. Also, specific tasks should be studied to determine whether voice recognition and generation can be effectively applied.
Veselis, Robert A; Pryor, Kane O; Reinsel, Ruth A; Li, Yuelin; Mehta, Meghana; Johnson, Ray
2009-02-01
Intravenous drugs acting via gamma-aminobutyric acid receptors produce memory impairment during conscious sedation. Memory function was assessed using event-related potentials (ERPs) while the drug was present. The continuous recognition task measured recognition of photographs from working (6 s) and long-term (27 s) memory while ERPs were recorded from Cz (familiarity recognition) and Pz electrodes (recollection recognition). Volunteer participants received sequential doses of one of placebo (n = 11), 0.45 and 0.9 microg/ml propofol (n = 10), 20 and 40 ng/ml midazolam (n = 12), 1.5 and 3 microg/ml thiopental (n = 11), or 0.25 and 0.4 ng/ml dexmedetomidine (n = 11). End-of-day yes/no recognition 225 min after the end of drug infusion tested memory retention of pictures encoded on the continuous recognition tasks. Active drugs increased reaction times and impaired memory on the continuous recognition task equally, except for a greater effect of midazolam (P < 0.04). Forgetting from the continuous recognition tasks to end of day was similar for all drugs (P = 0.40) and greater than placebo (P < 0.001). Propofol and midazolam decreased the area between first-presentation (new) and recognized (old, 27 s later) ERP waveforms from long-term memory for familiarity (P = 0.03) and possibly for recollection processes (P = 0.12). Propofol shifted ERP amplitudes to smaller voltages (P < 0.002). Dexmedetomidine may have impaired familiarity more than recollection processes (P = 0.10). Thiopental had no effect on ERPs. Propofol and midazolam impaired recognition ERPs from long-term memory but not working memory. ERP measures of memory revealed different pathways to end-of-day memory loss as early as 27 s after encoding.
A computerized recognition system for the home-based physiotherapy exercises using an RGBD camera.
Ar, Ilktan; Akgul, Yusuf Sinan
2014-11-01
Computerized recognition of home-based physiotherapy exercises has many benefits and has attracted considerable interest in the computer vision community. However, most methods in the literature treat this task as a special case of motion recognition. In contrast, we propose to employ the three main components of a physiotherapy exercise (the motion patterns, the stance knowledge, and the exercise object) as different recognition tasks and embed them separately into the recognition system. The low-level information about each component is gathered using machine learning methods. Then, we use a generative Bayesian network to recognize the exercise types by combining the information from these sources at an abstract level, which takes advantage of domain knowledge for a more robust system. Finally, a novel postprocessing step is employed to estimate the exercise repetition counts. The performance evaluation of the system is conducted on a new dataset which contains RGB (red, green, and blue) and depth videos of home-based exercise sessions for commonly applied shoulder and knee exercises. The proposed system works without any body-part segmentation, body-part tracking, joint detection, or temporal segmentation methods. In the end, favorable exercise recognition rates and encouraging results on the estimation of repetition counts are obtained.
Ahmad, Fahad N; Hockley, William E
2017-09-01
We examined whether processing fluency contributes to associative recognition of unitized pre-experimental associations. In Experiments 1A and 1B, we minimized perceptual fluency by presenting each word of the pairs on separate screens at both study and test, yet the compound word (CW) effect (i.e., hit and false-alarm rates greater for CW pairs, with no difference in discrimination) was not reduced. In Experiments 2A and 2B, conceptual fluency was examined by comparing transparent (e.g., hand bag) and opaque (e.g., rag time) CW pairs in lexical decision and associative recognition tasks. Lexical decision was faster for transparent CWs (Experiment 2A), but in associative recognition, the CW effect did not differ by CW pair type (Experiment 2B). In Experiments 3A and 3B, we examined whether priming that increases processing fluency would influence the CW effect. In Experiment 3A, CW and non-compound word pairs were preceded by matched and mismatched primes at test in an associative recognition task. In Experiment 3B, only transparent and opaque CW pairs were presented. Results showed that presenting matched versus mismatched primes at test did not influence the CW effect. The CW effect in yes-no associative recognition is due to reliance on the enhanced familiarity of unitized CW pairs.
Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela
2015-01-01
Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes. PMID:26233047
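The stepwise multiple linear regression reported here can be illustrated as a greedy forward selection over candidate predictors. The sketch below uses invented toy data and variable names; it shows the selection logic, not the study's actual analysis.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on the columns of X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

def forward_stepwise(X, y, names, min_gain=0.01):
    """Greedy forward selection: repeatedly add the predictor giving the
    largest R^2 gain, stopping when no candidate adds at least min_gain."""
    selected, best_r2 = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        r2, j = max((r_squared(X[:, selected + [j]], y), j) for j in remaining)
        if r2 - best_r2 < min_gain:
            break
        selected.append(j)
        remaining.remove(j)
        best_r2 = r2
    return [names[j] for j in selected], best_r2

# Toy data: speech scores driven mainly by a hypothetical 'SMD_2cpo' measure.
rng = np.random.default_rng(0)
n = 80
smd = rng.normal(size=n)       # spectral modulation/ripple depth measure
srd = rng.normal(size=n)       # spectral ripple discrimination
erb = rng.normal(size=n)       # equivalent rectangular bandwidth
speech = 2.0 * smd + 0.3 * srd + rng.normal(scale=0.5, size=n)
X = np.column_stack([smd, srd, erb])
chosen, r2 = forward_stepwise(X, speech, ["SMD_2cpo", "SRD", "ERB"])
```

With these toy data, the dominant predictor enters the model first and the irrelevant one is excluded, mirroring how the abstract's regression singled out SMD at 2.0 cpo.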
A model of traffic signs recognition with convolutional neural network
NASA Astrophysics Data System (ADS)
Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing
2016-10-01
In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors are challenging for automated traffic sign recognition algorithms. Deep learning has recently provided a new way to solve this kind of problem: a deep network can automatically learn features from a large number of data samples and achieve excellent recognition performance. We therefore approach the recognition of traffic signs as a general vision problem, with few assumptions specific to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as input, alternates convolutional and subsampling layers, and automatically extracts the features for recognizing the traffic sign images. The model includes an input layer, three convolutional layers, three subsampling layers, a fully-connected layer, and an output layer. To validate the proposed model, experiments were carried out on the public dataset of the China competition of fuzzy image processing. Experimental results show that the model achieves a recognition accuracy of 99.01% on the training dataset and 92% in the preliminary contest, ranking fourth.
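The layer sequence described (input, three convolutional layers alternating with three subsampling layers, one fully-connected layer, and an output layer) can be sketched as follows. This is an illustrative PyTorch sketch, not the authors' implementation: the channel counts, kernel sizes, 48x48 input resolution, and 43-class output are assumptions.

```python
import torch
import torch.nn as nn

class TrafficSignCNN(nn.Module):
    """Sketch of the described architecture: three convolutional layers
    alternated with three subsampling (pooling) layers, then one
    fully-connected layer and an output layer."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                       # 48 -> 24
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                       # 24 -> 12
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 6 * 6, 128), nn.ReLU(),  # fully-connected layer
            nn.Linear(128, num_classes),            # output layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TrafficSignCNN(num_classes=43)
logits = model(torch.randn(2, 3, 48, 48))  # a batch of two RGB sign crops
```

Training such a model end to end on labeled sign images is what lets it learn its own features, as the abstract emphasizes.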
ERIC Educational Resources Information Center
Unsworth, Nash; Brewer, Gene A.
2009-01-01
The authors of the current study examined the relationships among item-recognition, source-recognition, free recall, and other memory and cognitive ability tasks via an individual differences analysis. Two independent sources of variance contributed to item-recognition and source-recognition performance, and these two constructs related…
Facial emotion recognition in patients with focal and diffuse axonal injury.
Yassin, Walid; Callahan, Brandy L; Ubukata, Shiho; Sugihara, Genichi; Murai, Toshiya; Ueda, Keita
2017-01-01
Facial emotion recognition impairment has been well documented in patients with traumatic brain injury. Studies exploring the neural substrates involved in such deficits have implicated specific grey matter structures (e.g. orbitofrontal regions), as well as diffuse white matter damage. Our study aims to clarify whether different types of injuries (i.e. focal vs. diffuse) will lead to different types of impairments on facial emotion recognition tasks, as no study has directly compared these patients. The present study examined performance and response patterns on a facial emotion recognition task in 14 participants with diffuse axonal injury (DAI), 14 with focal injury (FI) and 22 healthy controls. We found that, overall, participants with FI and DAI performed more poorly than controls on the facial emotion recognition task. Further, we observed comparable emotion recognition performance in participants with FI and DAI, despite differences in the nature and distribution of their lesions. However, the rating response pattern between the patient groups was different. This is the first study to show that pure DAI, without gross focal lesions, can independently lead to facial emotion recognition deficits and that rating patterns differ depending on the type and location of trauma.
Crowd Sourcing Data Collection through Amazon Mechanical Turk
2013-09-01
The first recognition study consisted of a Panel Study using a simple detection protocol, in which participants were presented with vignettes and, for...variability than the crowdsourcing data set, hewing more closely to the year 1 verbs of interest and simple description grammar. The DT:PS data were... Abbreviations: RT:PS, Recognition Task: Panel Study; RT:RT, Recognition Task: Round Table; S3, Amazon Simple Storage Service; SVPA, Single Verb Present/Absent.
Superordinate Level Processing Has Priority Over Basic-Level Processing in Scene Gist Recognition
Sun, Qi; Zheng, Yang; Sun, Mingxia; Zheng, Yuanjie
2016-01-01
By combining a perceptual discrimination task and a visuospatial working memory task, the present study examined the effects of visuospatial working memory load on the hierarchical processing of scene gist. In the perceptual discrimination task, two scene images from the same (manmade–manmade pairing or natural–natural pairing) or different superordinate level categories (manmade–natural pairing) were presented simultaneously, and participants were asked to judge whether these two images belonged to the same basic-level category (e.g., street–street pairing) or not (e.g., street–highway pairing). In the concurrent working memory task, spatial load (position-based load in Experiment 1) and object load (figure-based load in Experiment 2) were manipulated. The results were as follows: (a) spatial load and object load have stronger effects on discrimination of same basic-level scene pairings than same superordinate level scene pairings; (b) spatial load has a larger impact on the discrimination of scene pairings at early stages than at later stages; on the contrary, object information has a larger influence at later stages than at early stages. It follows that superordinate level processing has priority over basic-level processing in scene gist recognition, and that spatial information contributes to the earlier, and object information to the later, stages of scene gist recognition. PMID:28382195
Faces with Light Makeup Are Better Recognized than Faces with Heavy Makeup
Tagai, Keiko; Ohtaka, Hitomi; Nittono, Hiroshi
2016-01-01
Many women wear facial makeup to accentuate their appeal and attractiveness. Makeup may vary from natural (light) to glamorous (heavy), depending on the context of interpersonal situations, an emphasis on femininity, and current societal makeup trends. This study examined how light makeup and heavy makeup influenced attractiveness ratings and facial recognition. In a rating task, 38 Japanese women assigned attractiveness ratings to 36 Japanese female faces with no makeup, light makeup, and heavy makeup (12 each). In a subsequent recognition task, the participants were presented with 36 old and 36 new faces. Results indicated that attractiveness was rated highest for the light makeup faces and lowest for the no makeup faces. In contrast, recognition performance was higher for the no makeup and light makeup faces than for the heavy makeup faces. Faces with heavy makeup produced a higher rate of false recognition than did other faces, possibly because heavy makeup creates an impression of the style of makeup itself, rather than the individual wearing the makeup. The present study suggests that light makeup is preferable to heavy makeup in that light makeup does not interfere with individual recognition and gives beholders positive impressions. PMID:26973553
Two Speed Factors of Visual Recognition Independently Correlated with Fluid Intelligence
Tachibana, Ryosuke; Namba, Yuri; Noguchi, Yasuki
2014-01-01
Growing evidence indicates a moderate but significant relationship between processing speed in visuo-cognitive tasks and general intelligence. On the other hand, findings from neuroscience indicate that the primate visual system consists of two major pathways: the ventral pathway for object recognition and the dorsal pathway for spatial processing and attentive analysis. Previous studies seeking visuo-cognitive factors of human intelligence found a significant correlation between fluid intelligence and inspection time (IT), an index of the speed of object recognition performed in the ventral pathway. We therefore examined the possibility that neural processing speed in the dorsal pathway also represents a factor of intelligence. Specifically, we used the mental rotation (MR) task, a popular psychometric measure of the mental speed of spatial processing in the dorsal pathway. We found that the speed of MR was significantly correlated with intelligence scores, while it had no correlation with IT (the recognition speed of visual objects). Our results support the new possibility that intelligence can be explained by two types of mental speed, one related to object recognition (IT) and another to the manipulation of mental images (MR). PMID:24825574
Crookes, Kate; Robbins, Rachel A
2014-10-01
Performance on laboratory face tasks improves across childhood, not reaching adult levels until adolescence. Debate surrounds the source of this development, with recent reviews suggesting that underlying face processing mechanisms are mature early in childhood and that the improvement seen on experimental tasks instead results from general cognitive/perceptual development. One face processing mechanism that has been argued to develop slowly is the ability to encode faces in a view-invariant manner (i.e., allowing recognition across changes in viewpoint). However, many previous studies have not controlled for general cognitive factors. In the current study, 8-year-olds and adults performed a recognition memory task with two study-test viewpoint conditions: same view (study front view, test front view) and change view (study front view, test three-quarter view). To allow quantitative comparison between children and adults, performance in the same view condition was matched across the groups by increasing the learning set size for adults. Results showed poorer memory in the change view condition than in the same view condition for both adults and children. Importantly, there was no quantitative difference between children and adults in the size of decrement in memory performance resulting from a change in viewpoint. This finding adds to growing evidence that face processing mechanisms are mature early in childhood. Copyright © 2014 Elsevier Inc. All rights reserved.
Pal, Reshmi; Mendelson, John; Clavier, Odile; Baggott, Mathew J; Coyle, Jeremy; Galloway, Gantt P
2016-01-01
In methamphetamine (MA) users, drug-induced neurocognitive deficits may help to determine treatment, monitor adherence, and predict relapse. To measure these relationships, we developed an iPhone app (Neurophone) to compare lab and field performance of N-Back, Stop Signal, and Stroop tasks that are sensitive to MA-induced deficits. Twenty healthy controls and 16 MA-dependent participants performed the tasks in-lab using a validated computerized platform and the Neurophone before taking the latter home and performing the tasks twice daily for two weeks. N-Back task: there were no clear differences in performance between computer-based and phone-based in-lab tests, or between phone-based in-lab and phone-based in-field tests. Stop Signal task: differences in parameters prevented comparison of the computer-based and phone-based versions, and there was a significant difference in phone performance between field and lab. Stroop task: response time measured by the speech recognition engine lacked the precision to yield quantifiable results. There was no learning effect over time. On average, each participant completed 84.3% of the in-field N-Back tasks and 90.4% of the in-field Stop Signal tasks (MA-dependent participants: 74.8% and 84.3%; healthy controls: 91.4% and 95.0%, respectively). Participants rated the Neurophone easy to use. Cognitive tasks performed in-field using the Neurophone have the potential to yield results comparable to those obtained in a laboratory setting, but the tasks need to be modified, as the app's voice recognition system is not yet adequate for timed tests.
Comparing the Frequency Effect Between the Lexical Decision and Naming Tasks in Chinese
Wu, Jei-Tun
2016-01-01
In psycholinguistic research, the frequency effect can be one of the indicators of eligible experimental tasks for examining the nature of lexical access. Usually, only one such task is chosen to examine lexical access in a study. Using two exemplar experiments, this paper introduces an approach that includes both the lexical decision task (LDT) and the naming task in one study. In the first experiment, the stimuli were Chinese characters with frequency and regularity manipulated. In the second experiment, the stimuli were switched to Chinese two-character words, in which the word frequency and the regularity of the leading character were manipulated. The logic of these two exemplar experiments was to explore important issues, such as the role of phonology in recognition, by comparing the frequency effect between the two tasks. The results revealed different patterns of lexical access from those reported for alphabetic systems. Experiment 1 showed a larger frequency effect in the naming task than in the LDT when the stimuli were Chinese characters; notably, for regular Chinese characters, the frequency effect in the naming task was roughly equivalent to that in the LDT. However, a smaller frequency effect appeared in the naming task than in the LDT when the stimuli were switched to Chinese two-character words in Experiment 2. Taking advantage of the respective demands and characteristics of both tasks, researchers can obtain a more complete and precise picture of character/word recognition. PMID:27077703
Bastin, Christine; Van der Linden, Martial
2003-01-01
Whether the format of a recognition memory task influences the contribution of recollection and familiarity to performance is a matter of debate. The authors investigated this issue by comparing the performance of 64 young (mean age = 21.7 years; mean education = 14.5 years) and 62 older participants (mean age = 64.4 years; mean education = 14.2 years) on a yes-no and a forced-choice recognition task for unfamiliar faces using the remember-know-guess procedure. Familiarity contributed more to forced-choice than to yes-no performance. Moreover, older participants, who showed a decrease in recollection together with an increase in familiarity, performed better on the forced-choice task than on the yes-no task, whereas younger participants showed the opposite pattern.
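Remember-know-guess data of the kind used here are commonly scored with the independence remember/know (IRK) correction, in which recollection is estimated by the "remember" rate and familiarity by the "know" rate rescaled by the opportunity to respond "know". The sketch below assumes that standard correction; it is not necessarily the exact computation used in this study.

```python
def irk_estimates(p_remember: float, p_know: float):
    """Independence remember/know (IRK) correction: recollection is the
    'remember' rate; familiarity is the 'know' rate rescaled by the
    trials on which recollection did not occur, K / (1 - R)."""
    if not (0.0 <= p_remember < 1.0 and 0.0 <= p_know <= 1.0):
        raise ValueError("rates must be proportions with p_remember < 1")
    recollection = p_remember
    familiarity = p_know / (1.0 - p_remember)
    return recollection, familiarity

# Example: 40% 'remember' and 30% 'know' responses to old items.
rec, fam = irk_estimates(0.40, 0.30)
```

Comparing such estimates across yes-no and forced-choice formats, and across age groups, is how differential contributions of recollection and familiarity can be quantified.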
[The effects of normal aging on face naming and recognition of famous people: battery 75].
Pluchon, C; Simonnet, E; Toullat, G; Gil, R
2002-07-01
The difficulty of recalling proper nouns is a frequent complaint among elderly people. We therefore tried to build and standardize a tool allowing a quantified assessment of the ability to name and recognize famous faces, specifying the role of gender, age, and cultural level for each kind of test. The performances of 542 subjects, divided into 3 age brackets and 3 academic levels, were analysed. To create the test material, the artistic team of the Grevin Museum (Paris) was called upon, whose work offers a homogeneous way to render famous faces: the same person photographed 75 characters from different social categories under identical lighting conditions in a single day. The results show that men perform better than women on the naming task, but that there is no difference between genders on the recognition task. Recognition performance is significantly better than naming performance whatever the age, gender, and cultural level. In general, performance is better the younger the subjects are and the higher their cultural level. Our study thus confirms that normal aging goes hand in hand with increasing difficulty in naming faces. Moreover, the results suggest that face recognition remains better preserved and that the greater difficulty in recalling a name is linked to problems of lexical access.
Variability sensitivity of dynamic texture based recognition in clinical CT data
NASA Astrophysics Data System (ADS)
Kwitt, Roland; Razzaque, Sharif; Lowell, Jeffrey; Aylward, Stephen
2014-03-01
Dynamic texture recognition using a database of template models has recently shown promising results for the task of localizing anatomical structures in Ultrasound video. In order to understand its clinical value, it is imperative to study the sensitivity with respect to inter-patient variability as well as sensitivity to acquisition parameters such as Ultrasound probe angle. Fully addressing patient and acquisition variability issues, however, would require a large database of clinical Ultrasound from many patients, acquired in a multitude of controlled conditions, e.g., using a tracked transducer. Since such data is not readily attainable, we advocate an alternative evaluation strategy using abdominal CT data as a surrogate. In this paper, we describe how to replicate Ultrasound variabilities by extracting subvolumes from CT and interpreting the image material as an ordered sequence of video frames. Utilizing this technique, and based on a database of abdominal CT from 45 patients, we report recognition results on an organ (kidney) recognition task, where we try to discriminate kidney subvolumes/videos from a collection of randomly sampled negative instances. We demonstrate that (1) dynamic texture recognition is relatively insensitive to inter-patient variation while (2) viewing angle variability needs to be accounted for in the template database. Since naively extending the template database to counteract variability issues can lead to impractical database sizes, we propose an alternative strategy based on automated identification of a small set of representative models.
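The paper's core trick is to reinterpret a CT subvolume as an ordered sequence of video frames. A minimal numpy sketch of that reinterpretation follows; the cubic subvolume shape, the slicing axis, and the toy array are assumptions for illustration, not the paper's exact sampling scheme.

```python
import numpy as np

def subvolume_as_video(ct_volume, center, size, axis=2):
    """Extract a cubic subvolume around `center` from a 3-D CT array and
    return its slices along `axis` as an ordered sequence of 'frames',
    mimicking an Ultrasound video clip."""
    half = size // 2
    slices = tuple(slice(c - half, c - half + size) for c in center)
    sub = np.asarray(ct_volume)[slices]
    return np.moveaxis(sub, axis, 0)  # frames first

# Toy volume standing in for an abdominal CT scan.
ct = np.random.default_rng(1).normal(size=(64, 64, 64))
frames = subvolume_as_video(ct, center=(32, 32, 32), size=16)
```

Varying `center` yields positive (organ) and negative (random) instances, and varying the slicing direction emulates different probe angles, the sensitivity of interest in the study.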
Macbeth, Abbe H.; Edds, Jennifer Stepp; Young, W. Scott
2010-01-01
Social recognition (SR) enables rodents to distinguish between familiar and novel conspecifics, largely through individual odor cues. SR tasks utilize the tendency for a male to sniff and interact with a novel individual more than a familiar individual. Many paradigms have been used to study the roles of the neuropeptides oxytocin and vasopressin in SR. However, inconsistencies in results have arisen within similar mouse strains, and across different paradigms and laboratories, making reliable testing of social recognition difficult. The current protocol details a novel approach that is replicable across investigators and in different strains of mice. We created a protocol that utilizes gonadally intact, singly housed females presented within corrals to group-housed males. Housing females singly prior to testing is particularly important for reliable discrimination. This methodology will be useful for studying short-term social memory in rodents, and may also be applicable for longer-term studies. PMID:19816420
Exploring the association between visual perception abilities and reading of musical notation.
Lee, Horng-Yih
2012-06-01
In the reading of music, the acquisition of pitch information depends primarily upon the spatial position of notes, and hence upon an individual's spatial processing ability. This study investigated the relationship between the ability to read single notes and visual-spatial ability. Participants with high and low single-note reading abilities were distinguished on the basis of their musical notation-reading performance; their spatial processing and object recognition abilities were then assessed. The group with lower note-reading ability made more errors than the group with higher note-reading ability on the mental rotation task. In contrast, there was no significant difference between the two groups on the object recognition task. These results suggest that note-reading may be related to visual-spatial processing abilities, and not to an individual's object recognition ability.
Morgenthaler, Jarste; Wiesner, Christian D; Hinze, Karoline; Abels, Lena C; Prehn-Kristensen, Alexander; Göder, Robert
2014-01-01
Sleep enhances memory consolidation and it has been hypothesized that rapid eye movement (REM) sleep in particular facilitates the consolidation of emotional memory. The aim of this study was to investigate this hypothesis using selective REM-sleep deprivation. We used a recognition memory task in which participants were shown negative and neutral pictures. Participants (N=29 healthy medical students) were separated into two groups (undisturbed sleep and selective REM-sleep deprived). Both groups also worked on the memory task in a wake condition. Recognition accuracy was significantly better for negative than for neutral stimuli and better after the sleep than the wake condition. There was, however, no difference in the recognition accuracy (neutral and emotional) between the groups. In summary, our data suggest that REM-sleep deprivation was successful and that the resulting reduction of REM-sleep had no influence on memory consolidation whatsoever.
Jacklin, Derek L; Cloke, Jacob M; Potvin, Alphonse; Garrett, Inara; Winters, Boyer D
2016-01-27
Rats, humans, and monkeys demonstrate robust crossmodal object recognition (CMOR), identifying objects across sensory modalities. We have shown that rats' performance of a spontaneous tactile-to-visual CMOR task requires functional integration of perirhinal (PRh) and posterior parietal (PPC) cortices, which seemingly provide visual and tactile object feature processing, respectively. However, research with primates has suggested that PRh is sufficient for multisensory object representation. We tested this hypothesis in rats using a modification of the CMOR task in which multimodal preexposure to the to-be-remembered objects significantly facilitates performance. In the original CMOR task, with no preexposure, reversible lesions of PRh or PPC produced patterns of impairment consistent with modality-specific contributions. Conversely, in the CMOR task with preexposure, PPC lesions had no effect, whereas PRh involvement was robust, proving necessary for phases of the task that did not require PRh activity when rats did not have preexposure; this pattern was supported by results from c-fos imaging. We suggest that multimodal preexposure alters the circuitry responsible for object recognition, in this case obviating the need for PPC contributions and expanding PRh involvement, consistent with the polymodal nature of PRh connections and results from primates indicating a key role for PRh in multisensory object representation. These findings have significant implications for our understanding of multisensory information processing, suggesting that the nature of an individual's past experience with an object strongly determines the brain circuitry involved in representing that object's multisensory features in memory. The ability to integrate information from multiple sensory modalities is crucial to the survival of organisms living in complex environments. Appropriate responses to behaviorally relevant objects are informed by integration of multisensory object features. 
We used crossmodal object recognition tasks in rats to study the neurobiological basis of multisensory object representation. When rats had no prior exposure to the to-be-remembered objects, the spontaneous ability to recognize objects across sensory modalities relied on functional interaction between multiple cortical regions. However, prior multisensory exploration of the task-relevant objects remapped cortical contributions, negating the involvement of one region and significantly expanding the role of another. This finding emphasizes the dynamic nature of cortical representation of objects in relation to past experience. Copyright © 2016 the authors.
Validation of a short-term memory test for the recognition of people and faces.
Leyk, D; Sievert, A; Heiss, A; Gorges, W; Ridder, D; Alexander, T; Wunderlich, M; Ruther, T
2008-08-01
Memorising and processing faces is a short-term memory dependent task of utmost importance in the security domain, in which constant and high performance is a must. Especially in access or passport control-related tasks, the timely identification of performance decrements is essential, margins of error are narrow and inadequate performance may have grave consequences. However, conventional short-term memory tests frequently use abstract settings with little relevance to working situations. They may thus be unable to capture task-specific decrements. The aim of the study was to devise and validate a new test, better reflecting job specifics and employing appropriate stimuli. After 1.5 s (short) or 4.5 s (long) presentation, a set of seven portraits of faces had to be memorised for comparison with two control stimuli. Stimulus appearance followed 2 s (first item) and 8 s (second item) after set presentation. Twenty eight subjects (12 male, 16 female) were tested at seven different times of day, 3 h apart. Recognition rates were above 60% even for the least favourable condition. Recognition was significantly better in the 'long' condition (+10%) and for the first item (+18%). Recognition time showed significant differences (10%) between items. Minor effects of learning were found for response latencies only. Based on occupationally relevant metrics, the test displayed internal and external validity, consistency and suitability for further use in test/retest scenarios. In public security, especially where access to restricted areas is monitored, margins of error are narrow and operator performance must remain high and level. Appropriate schedules for personnel, based on valid test results, are required. However, task-specific data and performance tests, permitting the description of task specific decrements, are not available. Commonly used tests may be unsuitable due to undue abstraction and insufficient reference to real-world conditions. 
Thus, tests are required that account for task-specific conditions and neurophysiological characteristics.
Lozano-Diez, Alicia; Zazo, Ruben; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin
2017-01-01
Language recognition systems based on bottleneck features have recently become the state-of-the-art in this research field, as shown by their success in the last Language Recognition Evaluation (LRE 2015) organized by NIST (U.S. National Institute of Standards and Technology). This type of system is based on a deep neural network (DNN) trained to discriminate between phonetic units, i.e., trained for the task of automatic speech recognition (ASR). This DNN compresses information in one of its layers, known as the bottleneck (BN) layer, which is used to obtain a new frame representation of the audio signal. This representation has been proven to be useful for the task of language identification (LID). Thus, bottleneck features are used as input to the language recognition system, instead of a classical parameterization of the signal based on cepstral feature vectors such as MFCCs (Mel Frequency Cepstral Coefficients). Despite the success of this approach in language recognition, there is a lack of studies analyzing in a systematic way how the topology of the DNN influences the performance of bottleneck feature-based language recognition systems. In this work, we try to fill in this gap, analyzing language recognition results with different topologies for the DNN used to extract the bottleneck features, comparing them with each other and against a reference system based on a more classical cepstral representation of the input signal with a total variability model. This way, we obtain useful knowledge about how the DNN configuration influences the performance of bottleneck feature-based language recognition systems.
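As a concrete illustration of the extraction step this abstract describes, the sketch below forward-propagates acoustic frames through a feed-forward DNN and reads out the activations of a narrow bottleneck layer as the new frame representation. All layer sizes are illustrative assumptions, and the weights here are random rather than ASR-trained, so this shows only the mechanics of the extraction:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical layer sizes: a 39-dim acoustic frame (e.g. MFCCs plus deltas),
# two wide hidden layers, a narrow 10-unit bottleneck (BN) layer, then one more.
sizes = [39, 256, 256, 10, 256]
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]

def bottleneck_features(frames, bn_index=3):
    """Forward-propagate frames and return the activations of the BN layer.

    In the real system the DNN would be trained on phonetic targets first;
    here the pass merely illustrates where the features come from."""
    h = frames
    for i, W in enumerate(weights, start=1):
        h = relu(h @ W)
        if i == bn_index:  # stop at the bottleneck layer
            return h
    return h

utterance = rng.normal(size=(200, 39))  # 200 frames of a fake utterance
bn = bottleneck_features(utterance)
print(bn.shape)  # (200, 10): one compact vector per frame
```

These per-frame vectors would then replace MFCCs as input to the downstream language identification back end.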
Bennetts, Rachel J; Mole, Joseph; Bate, Sarah
2017-09-01
Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.
Berggren, Nick; Richards, Anne; Taylor, Joseph; Derakshan, Nazanin
2013-01-01
Trait anxiety is associated with deficits in attentional control, particularly in the ability to inhibit prepotent responses. Here, we investigated this effect while varying the level of cognitive load in a modified antisaccade task that employed emotional facial expressions (neutral, happy, and angry) as targets. Load was manipulated using a secondary auditory task requiring recognition of tones (low load), or recognition of specific tone pitch (high load). Results showed that load increased antisaccade latencies on trials where gaze toward face stimuli should be inhibited. This effect was exacerbated for high anxious individuals. Emotional expression also modulated task performance on antisaccade trials for both high and low anxious participants under low cognitive load, but did not influence performance under high load. Collectively, results (1) suggest that individuals reporting high levels of anxiety are particularly vulnerable to the effects of cognitive load on inhibition, and (2) support recent evidence that loading cognitive processes can reduce emotional influences on attention and cognition. PMID:23717273
Herzmann, Grit
2016-07-01
The N250 and N250r (r for repetition, signaling a difference measure of priming) have been proposed to reflect the activation of perceptual memory representations for individual faces. Increased N250r and N250 amplitudes have been associated with higher levels of familiarity and expertise, respectively. In contrast to these observations, the N250 amplitude has been found to be larger for other-race than own-race faces in recognition memory tasks. This study investigated whether these findings were due to increased identity-specific processing demands for other-race relative to own-race faces and whether or not similar results would be obtained for the N250 in a repetition priming paradigm. Only Caucasian participants were tested; they completed two tasks with Caucasian, African-American, and Chinese faces. In a repetition priming task, participants decided whether or not sequentially presented faces were of the same identity (individuation task) or the same race (categorization task). Increased N250 amplitudes were found for African-American and Chinese faces relative to Caucasian faces, replicating previous results in recognition memory tasks. Contrary to the expectation that increased N250 amplitudes for other-race faces would be confined to the individuation task, both tasks showed similar results. This could be due to the fact that face identity information needed to be maintained across the sequential presentation of prime and target in both tasks. Increased N250 amplitudes for other-race faces are taken to represent increased neural demands on the identity-specific processing of other-race faces, which are typically processed less holistically and less on the level of the individual. Copyright © 2016 Elsevier B.V. All rights reserved.
O’Connor, Akira R.; Moulin, Chris J. A.
2013-01-01
Recent neuropsychological and neuroscientific research suggests that people who experience more déjà vu display characteristic patterns in normal recognition memory. We conducted a large individual differences study (n = 206) to test these predictions using recollection and familiarity parameters recovered from a standard memory task. Participants reported déjà vu frequency and a number of its correlates, and completed a recognition memory task analogous to a Remember-Know procedure. The individual difference measures replicated an established correlation between déjà vu frequency and frequency of travel, and recognition performance showed well-established word frequency and accuracy effects. Contrary to predictions, no relationships were found between déjà vu frequency and recollection or familiarity memory parameters from the recognition test. We suggest that déjà vu in the healthy population reflects a mismatch between errant memory signaling and memory monitoring processes not easily characterized by standard recognition memory task performance. PMID:24409159
Gilet, Estelle; Diard, Julien; Bessière, Pierre
2011-01-01
In this paper, we study the collaboration of perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation for the whole perception–action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action–Perception (BAP) model. Being a model of both perception and action processes, the purpose of this model is to study the interaction of these processes. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement. Motor knowledge can therefore be involved during perception tasks. In this paper, we formally define the BAP model and show how it solves the following six varied cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments. PMID:21674043
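The Bayesian inference at the heart of such a perception model can be sketched in miniature: a posterior over letters is computed by combining a prior with an assumed likelihood of an observed trajectory feature given each letter. The one-dimensional feature, the Gaussian likelihoods, and the three-letter alphabet are toy assumptions for illustration, not the actual BAP model:

```python
import numpy as np

# Toy recognition of a letter from a single 1-D trajectory feature.
letters = ["a", "b", "c"]
mu = np.array([0.0, 1.0, 2.0])     # assumed per-letter likelihood means
sd = np.array([0.5, 0.5, 0.5])     # assumed per-letter likelihood spreads
prior = np.array([1/3, 1/3, 1/3])  # uniform prior over letters

def posterior(x):
    """P(letter | x) via Bayes' rule: likelihood * prior, renormalized."""
    like = np.exp(-0.5 * ((x - mu) / sd) ** 2) / sd
    post = like * prior
    return post / post.sum()

p = posterior(0.9)
print(letters[int(np.argmax(p))])  # "b": the observation lies nearest its mean
```

The BAP model's other tasks (production, copying, writer recognition) correspond to running inference on different variables of the same joint distribution.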
Lind, Sophie E; Bowler, Dermot M
2009-09-01
This study investigated semantic and episodic memory in autism spectrum disorder (ASD), using a task which assessed recognition and self-other source memory. Children with ASD showed undiminished recognition memory but significantly diminished source memory, relative to age- and verbal ability-matched comparison children. Both children with and without ASD showed an "enactment effect", demonstrating significantly better recognition and source memory for self-performed actions than other-person-performed actions. Within the comparison group, theory-of-mind (ToM) task performance was significantly correlated with source memory, specifically for other-person-performed actions (after statistically controlling for verbal ability). Within the ASD group, ToM task performance was not significantly correlated with source memory (after controlling for verbal ability). Possible explanations for these relations between source memory and ToM are considered.
Navon letters affect face learning and face retrieval.
Lewis, Michael B; Mills, Claire; Hills, Peter J; Weston, Nicola
2009-01-01
Identifying the local letters of a Navon letter (a large letter made up of smaller, different letters) prior to recognition impairs recognition accuracy, while identifying the global letter enhances it (Macrae & Lewis, 2002). This effect may result from a transfer-inappropriate processing shift (TIPS) (Schooler, 2002). The present experiment extends research on the underlying mechanism of this effect by exploring this Navon effect on face learning as well as face recognition. The results of the two experiments revealed that when the Navon task used at retrieval was the same as that used at encoding, recognition accuracy was enhanced, whereas when the processing operations mismatched between retrieval and encoding, recognition accuracy was impaired. These results provide support for the TIPS explanation of the Navon effect.
Relevance feedback-based building recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Allinson, Nigel M.
2010-07-01
Building recognition is a nontrivial task in computer vision research that can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter the following two problems: 1) extracted low-level features cannot reveal the true semantic concepts; and 2) they usually involve high-dimensional data, which incurs heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between low-level visual features and high-level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme that integrates RF and subspace learning algorithms. Experimental results on our own building database show that the newly proposed scheme appreciably enhances recognition accuracy.
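One classical way to implement such a relevance feedback loop (not necessarily the scheme used in this paper) is a Rocchio-style update that moves the query's feature vector toward images the user marks as relevant and away from those marked non-relevant; the weights below are the conventional defaults, assumed for illustration:

```python
import numpy as np

def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query feature vector toward the centroid of user-marked
    relevant images and away from the centroid of non-relevant ones."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return q

# One feedback round on toy 2-D features
q = rocchio_update([1.0, 0.0], [[0.0, 1.0]], [[1.0, 0.0]])
print(q)  # [0.85 0.75]
```

Each round of feedback re-ranks the database by distance to the updated query, progressively aligning low-level features with the user's semantic intent.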
How Cross-Language Similarity and Task Demands Affect Cognate Recognition
ERIC Educational Resources Information Center
Dijkstra, Ton; Miwa, Koji; Brummelhuis, Bianca; Sappelli, Maya; Baayen, Harald
2010-01-01
This study examines how the cross-linguistic similarity of translation equivalents affects bilingual word recognition. Performing one of three tasks, Dutch-English bilinguals processed cognates with varying degrees of form overlap between their English and Dutch counterparts (e.g., "lamp-lamp" vs. "flood-vloed" vs. "song-lied"). In lexical…
The image-interpretation-workstation of the future: lessons learned
NASA Astrophysics Data System (ADS)
Maier, S.; van de Camp, F.; Hafermann, J.; Wagner, B.; Peinsipp-Byma, E.; Beyerer, J.
2017-05-01
In recent years, professionally used workstations have become increasingly complex, and multi-monitor systems are more and more common. Novel interaction techniques like gesture recognition were developed but used mostly for entertainment and gaming purposes. These human-computer interfaces are not yet widely used in professional environments, where they could greatly improve the user experience. To approach this problem, we combined existing tools in our image-interpretation-workstation of the future, a multi-monitor workplace comprising four screens. Each screen is dedicated to a special task in the image-interpreting process: a geo-information system to geo-reference the images and provide a spatial reference for the user, an interactive recognition support tool, an annotation tool and a reporting tool. To further support the complex task of image interpreting, self-developed interaction systems for head-pose estimation and hand tracking were used in addition to more common technologies like touchscreens, face identification and speech recognition. A set of experiments was conducted to evaluate the usability of the different interaction systems. Two typical extensive tasks of image interpreting were devised and approved by military personnel. They were then tested with a current setup of an image-interpreting workstation using only keyboard and mouse against our image-interpretation-workstation of the future. To get a more detailed look at the usefulness of the interaction techniques in a multi-monitor setup, the hand tracking, head-pose estimation and face recognition were further evaluated using tests inspired by everyday tasks. The results of the evaluation and the discussion are presented in this paper.
The aftermath of memory retrieval for recycling visual working memory representations.
Park, Hyung-Bum; Zhang, Weiwei; Hyun, Joo-Seok
2017-07-01
We examined the aftermath of accessing and retrieving a subset of information stored in visual working memory (VWM): namely, whether detection of a mismatch between memory and perception can impair the original memory of an item while triggering recognition-induced forgetting for the remaining, untested items. For this purpose, we devised a consecutive change-detection task wherein two successive testing probes were displayed after a single set of memory items. Across two experiments utilizing different memory-testing methods (whole vs. single probe), we observed a reliable pattern of poor performance in change detection for the second test when the first test had exhibited a color change. The impairment after a color change was evident even when the same memory item was repeatedly probed; this suggests that an attention-driven, salient visual change made it difficult to reinstate the previously remembered item. The second change detection, for memory items untested during the first change detection, was also found to be inaccurate, indicating that recognition-induced forgetting had occurred for the unprobed items in VWM. In a third experiment, we conducted a task that involved change detection plus continuous recall, wherein a memory recall task was presented after the change detection task. The analyses of the distributions of recall errors with a probabilistic mixture model revealed that the memory impairments from both visual changes and recognition-induced forgetting are explained better by the stochastic loss of memory items than by their degraded resolution. These results indicate that attention-driven visual change and recognition-induced forgetting jointly influence the "recycling" of VWM representations.
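In studies of this kind, the probabilistic mixture model is typically a mixture of an "in-memory" response distribution centred on the target and a uniform guessing distribution: stochastic loss of items shows up as a lower mixture weight, while degraded resolution shows up as a wider in-memory distribution. A simplified sketch follows, using a plain normal in place of the circular von Mises usually fitted, with illustrative parameters:

```python
import numpy as np

def mixture_loglik(errors, p_mem, sd):
    """Log-likelihood of recall errors (radians, in [-pi, pi]) under a mixture
    of a normal 'in memory' component and uniform random guessing."""
    gauss = np.exp(-0.5 * (errors / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    unif = 1.0 / (2 * np.pi)
    return np.sum(np.log(p_mem * gauss + (1.0 - p_mem) * unif))

# Recover the memory weight by a coarse grid search on synthetic errors
rng = np.random.default_rng(1)
n, true_p = 1000, 0.7
in_mem = rng.random(n) < true_p
errors = np.where(in_mem, rng.normal(0.0, 0.3, n), rng.uniform(-np.pi, np.pi, n))
grid = np.arange(0.05, 1.0, 0.05)
best_p = grid[np.argmax([mixture_loglik(errors, p, 0.3) for p in grid])]
```

Comparing conditions on the fitted weight versus the fitted width is what licenses the paper's distinction between losing items outright and remembering them less precisely.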
Smith, Sherri L; Pichora-Fuller, M Kathleen; Alexander, Genevieve
The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). 
In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH group listeners did not find it so demanding until the additional alphabet processing task was added. These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds with each of the older groups and recall scores in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss.
A multimodal approach to emotion recognition ability in autism spectrum disorders.
Jones, Catherine R G; Pickles, Andrew; Falcaro, Milena; Marsden, Anita J S; Happé, Francesca; Scott, Sophie K; Sauter, Disa; Tregay, Jenifer; Phillips, Rebecca J; Baird, Gillian; Simonoff, Emily; Charman, Tony
2011-03-01
Autism spectrum disorders (ASD) are characterised by social and communication difficulties in day-to-day life, including problems in recognising emotions. However, experimental investigations of emotion recognition ability in ASD have been equivocal, hampered by small sample sizes, narrow IQ range and over-focus on the visual modality. We tested 99 adolescents (mean age 15;6 years, mean IQ 85) with an ASD and 57 adolescents without an ASD (mean age 15;6 years, mean IQ 88) on a facial emotion recognition task and two vocal emotion recognition tasks (one verbal; one non-verbal). Recognition of happiness, sadness, fear, anger, surprise and disgust was tested. Using structural equation modelling, we conceptualised emotion recognition ability as a multimodal construct, measured by the three tasks. We examined how the mean levels of recognition of the six emotions differed by group (ASD vs. non-ASD) and IQ (≥ 80 vs. < 80). We found no evidence of a fundamental emotion recognition deficit in the ASD group, and analysis of error patterns suggested that the ASD group was vulnerable to the same pattern of confusions between emotions as the non-ASD group. However, recognition ability was significantly impaired in the ASD group for surprise. IQ had a strong and significant effect on performance for the recognition of all six emotions, with higher-IQ adolescents outperforming lower-IQ adolescents. The findings do not suggest a fundamental difficulty with the recognition of basic emotions in adolescents with ASD. © 2010 The Authors. Journal of Child Psychology and Psychiatry © 2010 Association for Child and Adolescent Mental Health.
Andoh, Jamila; Paus, Tomás
2011-02-01
Repetitive TMS (rTMS) provides a noninvasive tool for modulating neural activity in the human brain. In healthy participants, rTMS applied over the language-related areas in the left hemisphere, including the left posterior temporal area of Wernicke (LTMP) and inferior frontal area of Broca, have been shown to affect performance on word recognition tasks. To investigate the neural substrate of these behavioral effects, off-line rTMS was combined with fMRI acquired during the performance of a word recognition task. Twenty right-handed healthy men underwent fMRI scans before and after a session of 10-Hz rTMS applied outside the magnetic resonance scanner. Functional magnetic resonance images were acquired during the performance of a word recognition task that used English or foreign-language words. rTMS was applied over the LTMP in one group of 10 participants (LTMP group), whereas the homologue region in the right hemisphere was stimulated in another group of 10 participants (RTMP group). Changes in task-related fMRI response (English minus foreign languages) and task performances (response time and accuracy) were measured in both groups and compared between pre-rTMS and post-rTMS. Our results showed that rTMS increased task-related fMRI response in the homologue areas contralateral to the stimulated sites. We also found an effect of rTMS on response time for the LTMP group only. These findings provide insights into changes in neural activity in cortical regions connected to the stimulated site and are consistent with a hypothesis raised in a previous review about the role of the homologue areas in the contralateral hemisphere for preserving behavior after neural interference.
Lalanne, Jennifer; Rozenberg, Johanna; Grolleau, Pauline; Piolino, Pascale
2013-12-01
The Self-reference effect (SRE) on long-term episodic memory and autonoetic consciousness has been investigated in young adults, scarcely in older adults, but never in Alzheimer's patients. Is the functional influence of Self-reference still present when the individual's memory and identity are impaired? We investigated this issue in 60 young subjects, 41 elderly subjects, and 28 patients with Alzheimer's disease, by using 1) an incidental learning task of personality traits in three encoding conditions, inducing variable degrees of depth of processing and personal involvement, 2) a 2-minute retention interval free recall task, and 3) a 20-minute delayed recognition task, combined with a remember-know paradigm. Each recorded score was corrected for errors (intrusions in free recall, false alarms in recognition, and false source memory in remember responses). Compared with alternative encodings, Self-reference significantly enhanced performance on the free recall task in the young group, and on the recognition task in both the young and older groups but not in the Alzheimer group. The most important finding in the Alzheimer group is that Self-reference most often led to a subjective sense of remembering (especially for the positive words) with retrieval of the correct encoding source. This Self-reference recollection effect in patients was related to independent subjective measures of a positive and definite sense of Self (measured by the Tennessee Self Concept Scale), and to memory complaints in daily life. In conclusion, these results demonstrate the power and robustness of the Self-reference effect on recollection in long-term episodic memory in Alzheimer's disease, although retrieval is considerably reduced. These results should open new perspectives for the development of rehabilitation programs for memory deficits.
Recognition-induced forgetting is not due to category-based set size.
Maxcey, Ashleigh M
2016-01-01
What are the consequences of accessing a visual long-term memory representation? Previous work has shown that accessing a long-term memory representation via retrieval improves memory for the targeted item and hurts memory for related items, a phenomenon called retrieval-induced forgetting. Recently we found a similar forgetting phenomenon with recognition of visual objects. Recognition-induced forgetting occurs when practice recognizing an object during a two-alternative forced-choice task, from a group of objects learned at the same time, leads to worse memory for objects from that group that were not practiced. An alternative explanation of this effect, advanced by some researchers, is that category-based set size induces the forgetting, not recognition practice. This alternative is possible because during recognition practice subjects make old-new judgments in a two-alternative forced-choice task and are thus exposed to more objects from practiced categories, potentially inducing forgetting due to set size. Herein I pitted the category-based set size hypothesis against the recognition-induced forgetting hypothesis. To this end, I parametrically manipulated the amount of practice objects received in the recognition-induced forgetting paradigm. If forgetting is due to category-based set size, then the magnitude of forgetting of related objects will increase as the number of practice trials increases. If forgetting is recognition induced, the set size of exemplars from any given category should not be predictive of memory for practiced objects. Consistent with this latter hypothesis, additional practice systematically improved memory for practiced objects, but did not systematically affect forgetting of related objects. These results firmly establish that recognition practice induces forgetting of related memories. 
Future directions and important real-world applications of using recognition to access our visual memories of previously encountered objects are discussed.
Scene recognition based on integrating active learning with dictionary learning
NASA Astrophysics Data System (ADS)
Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen
2018-04-01
Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large amount of labeled training samples to achieve good performance. However, labeling images manually is a time-consuming task and often unrealistic in practice. In order to obtain satisfying recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as the classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion in active learning, IALDL considers both uncertainty and representativeness in order to effectively select useful unlabeled samples from a given sample set for expanding the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
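A sampling criterion combining uncertainty and representativeness, as the abstract describes, can be sketched as follows. The specific choices here (entropy of predicted class probabilities for uncertainty, mean cosine similarity to the rest of the pool for representativeness, and a linear trade-off `lam`) are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def select_samples(probs, X, k=1, lam=0.5):
    """Rank unlabeled samples by uncertainty (entropy of predicted class
    probabilities) plus representativeness (mean cosine similarity to the
    rest of the pool); return the indices of the top k to be labeled."""
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)
    representativeness = (Xn @ Xn.T).mean(axis=1)
    score = lam * entropy + (1.0 - lam) * representativeness
    return np.argsort(score)[::-1][:k]

probs = np.array([[1/3, 1/3, 1/3],   # maximally uncertain prediction
                  [0.98, 0.01, 0.01],
                  [0.01, 0.98, 0.01]])
X = np.ones((3, 4))                   # identical features: equal representativeness
print(select_samples(probs, X))       # [0]: the uncertain sample is queried first
```

The selected samples would be labeled by an oracle and added to the training set before the classifier is retrained on the next round.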
Word-to-picture recognition is a function of motor components mappings at the stage of retrieval.
Brouillet, Denis; Brouillet, Thibaut; Milhau, Audrey; Heurley, Loïc; Vagnot, Caroline; Brunel, Lionel
2016-10-01
Embodied approaches to cognition argue that retrieval involves the re-enactment of both the sensory and motor components of the desired memory. In this study, we investigated the effect of the motor action performed to produce the response in a recognition task when this action is compatible with the affordance of the objects that have to be recognised. In our experiment, participants were first asked to learn a list of words referring to graspable objects, and then told to make recognition judgements on pictures. The pictures represented objects whose graspable part pointed either to the same or to the opposite side as the "Yes" response key. Results show a robust effect of compatibility between object affordance and response hand. Moreover, this compatibility improves participants' discrimination ability, suggesting that motor components are a relevant cue for memory judgement at the stage of retrieval in a recognition task. More broadly, our data highlight that memory judgements are a function of motor-component mappings at the stage of retrieval. © 2015 International Union of Psychological Science.
Quantifying facial expression recognition across viewing conditions.
Goren, Deborah; Wilson, Hugh R
2006-04-01
Facial expressions are key to social interactions and to assessment of potential danger in various situations. Therefore, our brains must be able to recognize facial expressions when they are transformed in biologically plausible ways. We used synthetic happy, sad, angry and fearful faces to determine the amount of geometric change required to recognize these emotions during brief presentations. Five-alternative forced choice conditions involving central viewing, peripheral viewing and inversion were used to study recognition among the four emotions. Two-alternative forced choice was used to study affect discrimination when spatial frequency information in the stimulus was modified. The results show an emotion and task-dependent pattern of detection. Facial expressions presented with low peak frequencies are much harder to discriminate from neutral than faces defined by either mid or high peak frequencies. Peripheral presentation of faces also makes recognition much more difficult, except for happy faces. Differences between fearful detection and recognition tasks are probably due to common confusions with sadness when recognizing fear from among other emotions. These findings further support the idea that these emotions are processed separately from each other.
Ragland, J. Daniel; Ranganath, Charan; Harms, Michael P.; Barch, Deanna M.; Gold, James M.; Layher, Evan; Lesh, Tyler A.; MacDonald, Angus W.; Niendam, Tara A.; Phillips, Joshua; Silverstein, Steven M.; Yonelinas, Andrew P.; Carter, Cameron S.
2015-01-01
Importance Individuals with schizophrenia (SZ) can encode item-specific information to support familiarity-based recognition, but are disproportionately impaired at encoding inter-item relationships (relational encoding) and at recollecting information. The Relational and Item-Specific Encoding (RiSE) paradigm has been used to disentangle these encoding and retrieval processes, which may depend on specific medial temporal lobe (MTL) and prefrontal cortex (PFC) subregions. Functional imaging during RiSE task performance could help specify dysfunctional neural circuits in SZ that can be targeted by interventions to improve memory and functioning in the illness. Objectives To use functional magnetic resonance imaging (fMRI) to test the hypothesis that SZ disproportionately affects MTL and PFC subregions during relational encoding and retrieval, relative to item-specific memory processes. Imaging results from healthy comparison subjects (HC) were also used to establish neural construct validity for RiSE. Design, Setting, and Participants This multi-site, case-control, cross-sectional fMRI study was conducted at five CNTRACS sites. The final sample included 52 clinically stable outpatients with SZ and 57 demographically matched HC. Main Outcomes and Measures Behavioral performance speed and accuracy (d′) on item recognition and associative recognition tasks, and voxelwise statistical parametric maps for a priori MTL and PFC regions of interest (ROI), testing activation differences between relational and item-specific memory during encoding and retrieval. Results Item recognition was disproportionately impaired in SZ patients relative to controls following relational encoding. The differential deficit was accompanied by reduced dorsolateral prefrontal cortex (DLPFC) activation during relational encoding in SZ relative to HC.
Retrieval success (hits > misses) was associated with hippocampal (HI) activation in HC during relational item recognition and associative recognition conditions, and HI activation was specifically reduced in SZ for recognition of relational but not item-specific information. Conclusions In this unique, multi-site fMRI study, HC results supported RiSE construct validity by revealing expected memory effects in PFC and MTL subregions during encoding and retrieval. Comparison of SZ and HC revealed disproportionate memory deficits in SZ for relational versus item-specific information, accompanied by regionally and functionally specific deficits in DLPFC and HI activation. PMID:26200928
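The accuracy measure d′ reported above is the standard signal-detection sensitivity index: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch of its computation, with an illustrative log-linear correction for extreme rates (the correction choice and the example counts are assumptions, not taken from the study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Adding 0.5 to each cell (log-linear correction) keeps the inverse
    normal transform finite when a raw rate would be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative counts: 40 hits / 10 misses, 12 false alarms / 38 correct rejections
print(d_prime(40, 10, 12, 38))
```

Equal hit and false-alarm rates yield d′ = 0, i.e. no discrimination between old and new items.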
Dai, Hong-Jie; Lai, Po-Ting; Chang, Yung-Chun; Tsai, Richard Tzong-Han
2015-01-01
The functions of chemical compounds and drugs that affect biological processes, and their particular effects on the onset and treatment of diseases, have attracted increasing interest with the advancement of research in the life sciences. To extract knowledge from the extensive literature on such compounds and drugs, the organizers of BioCreative IV administered the CHEMical Compound and Drug Named Entity Recognition (CHEMDNER) task to establish a standard dataset for evaluating state-of-the-art chemical entity recognition methods. This study introduces the approach of our CHEMDNER system. Instead of emphasizing the development of novel feature sets for machine learning, this study investigates the effect of various tag schemes on the recognition of the names of chemicals and drugs using conditional random fields. Experiments were conducted with combinations of different tokenization strategies and tag schemes to investigate the effects of tag set selection and tokenization method on the CHEMDNER task. This study presents the CHEMDNER performance of three more representative tag schemes (IOBE, IOBES, and IOB12E) when applied to a widely utilized IOB tag set and combined with coarse- and fine-grained tokenization methods. The experimental results reveal that the fine-grained tokenization strategy performs best in terms of precision, recall, and F-score when the IOBES tag set is utilized. The IOBES model with fine-grained tokenization yielded the best F-scores in the six chemical entity categories other than the "Multiple" entity category. Nonetheless, no significant improvement was observed when the more representative tag schemes were used with the coarse- or fine-grained tokenization rules. The best F-scores achieved by the developed system on the test dataset of the CHEMDNER task were 0.833 and 0.815 for the chemical document indexing and chemical entity mention recognition tasks, respectively.
The results herein highlight the importance of tag set selection and the use of different tokenization strategies. Fine-grained tokenization combined with the IOBES tag set most effectively recognizes chemical and drug names. To the best of the authors' knowledge, this is the first comprehensive investigation of the use of various tag schemes combined with different tokenization strategies for the recognition of chemical entities.
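The tag schemes compared in the study differ only in how an entity span maps to per-token labels: IOB marks a span's beginning and interior, while IOBES additionally distinguishes single-token entities (S-) and span-final tokens (E-). A minimal sketch of the IOBES encoding (the entity label `CHEM` and the span representation are illustrative, not the study's actual preprocessing):

```python
def to_iobes(tokens, entities):
    """Tag tokens with the IOBES scheme.

    `entities` is a list of (start, end) token-index spans, end exclusive.
    Single-token entities get S-, multi-token entities B- ... I- ... E-.
    """
    tags = ["O"] * len(tokens)
    for start, end in entities:
        if end - start == 1:
            tags[start] = "S-CHEM"
        else:
            tags[start] = "B-CHEM"
            for i in range(start + 1, end - 1):
                tags[i] = "I-CHEM"
            tags[end - 1] = "E-CHEM"
    return tags

tokens = ["The", "acetyl", "salicylic", "acid", "dose", "and", "ibuprofen"]
print(to_iobes(tokens, [(1, 4), (6, 7)]))
# → ['O', 'B-CHEM', 'I-CHEM', 'E-CHEM', 'O', 'O', 'S-CHEM']
```

The richer label set gives a CRF more distinct boundary states to learn, which is the mechanism behind the F-score differences the study reports.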
Learning task affects ERP-correlates of the own-race bias, but not recognition memory performance.
Stahl, Johanna; Wiese, Holger; Schweinberger, Stefan R
2010-06-01
People are generally better in recognizing faces from their own ethnic group as opposed to faces from another ethnic group, a finding which has been interpreted in the context of two opposing theories. Whereas perceptual expertise theories stress the role of long-term experience with one's own ethnic group, race feature theories assume that the processing of an other-race-defining feature triggers inferior coding and recognition of faces. The present study tested these hypotheses by manipulating the learning task in a recognition memory test. At learning, one group of participants categorized faces according to ethnicity, whereas another group rated facial attractiveness. Subsequent recognition tests indicated clear and similar own-race biases for both groups. However, ERPs from learning and test phases demonstrated an influence of learning task on neurophysiological processing of own- and other-race faces. While both groups exhibited larger N170 responses to Asian as compared to Caucasian faces, task-dependent differences were seen in a subsequent P2 ERP component. Whereas the P2 was more pronounced for Caucasian faces in the categorization group, this difference was absent in the attractiveness rating group. The learning task thus influences early face encoding. Moreover, comparison with recent research suggests that this attractiveness rating task influences the processes reflected in the P2 in a similar manner as perceptual expertise for other-race faces does. By contrast, the behavioural own-race bias suggests that long-term expertise is required to increase other-race face recognition and hence attenuate the own-race bias. Copyright 2010 Elsevier Ltd. All rights reserved.
Le Berre, Anne-Pascale; Pinon, Karine; Vabret, François; Pitel, Anne-Lise; Allain, Philippe; Eustache, Francis; Beaunieux, Hélène
2010-11-01
Alcoholism affects various cognitive processes, including components of memory. Metamemory, though of particular interest for patient treatment, has not yet been extensively investigated. A feeling-of-knowing (FOK) measure of metamemory was administered to 28 alcoholic patients and 28 healthy controls during an episodic memory task including the learning of 20 pairs of items, followed by a 20-minute delayed recall and a recognition task. Prior to recognition, participants rated their ability to recognize each nonrecalled word among 4 items. This episodic FOK measure served to compare predictions of future recognition performance and actual recognition performance. Furthermore, a subjective measure of metamemory, the Metamemory In Adulthood (MIA) questionnaire, was completed by patients and controls. This assessment of alcoholic patients' metamemory profile was accompanied by an evaluation of episodic memory and executive functioning. FOK results revealed deficits in accuracy, with the alcoholic patients providing overestimations. There were also links between FOK inaccuracy, executive decline, and episodic memory impairment in patients. MIA results showed that although alcoholics did display memory difficulties, they did not differ from controls on questions about memory capacity. Chronic alcoholism affects both episodic memory and metamemory for novel information. Patients were relatively unaware of their memory deficits and believed that their memory was as good as that of the healthy controls. The monitoring measure (FOK) and the subjective measure of metamemory (MIA) showed that patients with chronic alcoholism overestimated their memory capacities. Episodic memory deficit and executive dysfunction would explain metamemory decline in this clinical population. Copyright © 2010 by the Research Society on Alcoholism.
Effects of post-encoding stress on performance in the DRM false memory paradigm
Pardilla-Delgado, Enmanuelle; Alger, Sara E.; Cunningham, Tony J.; Kinealy, Brian
2016-01-01
Numerous studies have investigated how stress impacts veridical memory, but how stress influences false memory formation remains poorly understood. In order to target memory consolidation specifically, a psychosocial stress (TSST) or control manipulation was administered following encoding of 15 neutral, semantically related word lists (DRM false memory task) and memory was tested 24 h later. Stress decreased recognition of studied words, while increasing false recognition of semantically related lure words. Moreover, while control subjects remembered true and false words equivalently, stressed subjects remembered more false than true words. These results suggest that stress supports gist memory formation in the DRM task, perhaps by hindering detail-specific processing in the hippocampus. PMID:26670187
Schall, Sonja; von Kriegstein, Katharina
2014-01-01
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026
Recognizable or Not: Towards Image Semantic Quality Assessment for Compression
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Dandan; Li, Houqiang
2017-12-01
Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. Recently, however, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we performed a subjective test of text recognition from compressed images, and observed that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates a promising direction towards higher compression ratios for specific semantic analysis tasks.
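The full-reference measure described above compares features extracted from text regions of the original and compressed images. A minimal sketch of such a comparison, assuming the feature vectors have already been extracted (the cosine-similarity choice is an illustrative stand-in, not the paper's actual measure; feature extraction, e.g. from an OCR network's activations, is not shown):

```python
import math

def semantic_quality(ref_features, cmp_features):
    """Full-reference semantic quality: cosine similarity between feature
    vectors from the text regions of the reference and compressed images.
    1.0 means the compressed image preserves the features exactly."""
    dot = sum(r * c for r, c in zip(ref_features, cmp_features))
    norm = (math.sqrt(sum(r * r for r in ref_features))
            * math.sqrt(sum(c * c for c in cmp_features)))
    return dot / norm if norm else 0.0

# Identical features: quality ~1.0; orthogonal features: 0.0
print(semantic_quality([0.2, 0.7, 0.1], [0.2, 0.7, 0.1]))
print(semantic_quality([1.0, 0.0], [0.0, 1.0]))
```

A rate-distortion loop would then trade bits against this score instead of against PSNR or SSIM.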
Handwritten digits recognition based on immune network
NASA Astrophysics Data System (ADS)
Li, Yangyang; Wu, Yunhui; Jiao, Lc; Wu, Jianshe
2011-11-01
With the development of society, handwritten digit recognition has been widely applied in production and daily life, yet it remains a difficult task in the field of pattern recognition. In this paper, a new method is presented for handwritten digit recognition. The digit samples are first preprocessed and their features extracted. Based on these features, a novel immune network classification algorithm is designed and applied to handwritten digit recognition. The proposed algorithm combines Jerne's immune network model for feature selection with the KNN method for classification; its characteristic is a novel network with parallel computing and learning. The performance of the proposed method is evaluated on the MNIST handwritten digit dataset and compared with other recognition algorithms: KNN, ANN, and SVM. The results show that the novel classification algorithm based on an immune network gives promising performance and stable behavior for handwritten digit recognition.
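The classification stage described above applies KNN to the features retained by the immune network. A minimal sketch of that stage only (the feature vectors and labels are illustrative; the immune-network feature selection itself is not shown):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Minimal k-nearest-neighbour classifier.

    `train` is a list of (feature_vector, label) pairs; the k training
    points closest to `query` (squared Euclidean distance) vote on the
    predicted label."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D "digit features" standing in for selected MNIST features
train = [([0, 0], "0"), ([0, 1], "0"),
         ([5, 5], "1"), ([5, 6], "1"), ([6, 5], "1")]
print(knn_classify(train, [5, 5]))  # → "1"
```

Feature selection matters here because KNN distances degrade quickly with irrelevant dimensions, which is the motivation for placing the immune network in front of it.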
Effect of minimal/mild hearing loss on children's speech understanding in a simulated classroom.
Lewis, Dawna E; Valente, Daniel L; Spalding, Jody L
2015-01-01
While classroom acoustics can affect educational performance for all students, the impact for children with minimal/mild hearing loss (MMHL) may be greater than for children with normal hearing (NH). The purpose of this study was to examine the effect of MMHL on children's speech recognition, comprehension, and looking behavior in a simulated classroom environment. It was hypothesized that children with MMHL would perform similarly to their peers with NH on the speech recognition task but would perform more poorly on the comprehension task. Children with MMHL also were expected to look toward talkers more often than children with NH. Eighteen children with MMHL and 18 age-matched children with NH participated. In a simulated classroom environment, children listened to lines from an elementary-age-appropriate play read by a teacher and four students reproduced over LCD monitors and loudspeakers located around the listener. A gyroscopic headtracking device was used to monitor looking behavior during the task. At the end of the play, comprehension was assessed by asking a series of 18 factual questions. Children also were asked to repeat 50 meaningful sentences with three key words each presented audio-only by a single talker either from the loudspeaker at 0 degree azimuth or randomly from the five loudspeakers. Both children with NH and those with MMHL performed at or near ceiling on the sentence recognition task. For the comprehension task, children with MMHL performed more poorly than those with NH. Assessment of looking behavior indicated that both groups of children looked at talkers while they were speaking less than 50% of the time. In addition, the pattern of overall looking behaviors suggested that, compared with older children with NH, a larger portion of older children with MMHL may demonstrate looking behaviors similar to younger children with or without MMHL.
The results of this study demonstrate that, under realistic acoustic conditions, it is difficult to differentiate performance among children with MMHL and children with NH using a sentence recognition task. The more cognitively demanding comprehension task identified performance differences between these two groups. The comprehension task represented a condition in which the persons talking change rapidly and are not readily visible to the listener. Examination of looking behavior suggested that, in this complex task, attempting to visualize the talker may inefficiently utilize cognitive resources that would otherwise be allocated for comprehension.
NASA Astrophysics Data System (ADS)
Syryamkim, V. I.; Kuznetsov, D. N.; Kuznetsova, A. S.
2018-05-01
Image recognition is an information process implemented by an information converter (an intelligent information channel, or recognition system) having an input and an output. The input of the system receives information about the characteristics of the objects being presented; the output reports which classes (generalized images) the recognized objects are assigned to. When creating and operating an automated pattern recognition system, a number of problems must be solved. Different authors formulate these tasks, and even the set of tasks itself, differently, since the formulation depends to some extent on the specific mathematical model on which a given recognition system is based. These tasks include formalizing the domain, forming a training sample, training the recognition system, and reducing the dimensionality of the feature space.
Specific Impairments in the Recognition of Emotional Facial Expressions in Parkinson’s Disease
Clark, Uraina S.; Neargarder, Sandy; Cronin-Golomb, Alice
2008-01-01
Studies investigating the ability to recognize emotional facial expressions in non-demented individuals with Parkinson’s disease (PD) have yielded equivocal findings. A possible reason for this variability may lie in the confounding of emotion recognition with cognitive task requirements, a confound arising from the lack of a control condition using non-emotional stimuli. The present study examined emotional facial expression recognition abilities in 20 non-demented patients with PD and 23 control participants relative to their performances on a non-emotional landscape categorization test with comparable task requirements. We found that PD participants were normal on the control task but exhibited selective impairments in the recognition of facial emotion, specifically for anger (driven by those with right hemisphere pathology) and surprise (driven by those with left hemisphere pathology), even when controlling for depression level. Male but not female PD participants further displayed specific deficits in the recognition of fearful expressions. We suggest that the neural substrates that may subserve these impairments include the ventral striatum, amygdala, and prefrontal cortices. Finally, we observed that in PD participants, deficiencies in facial emotion recognition correlated with higher levels of interpersonal distress, which calls attention to the significant psychosocial impact that facial emotion recognition impairments may have on individuals with PD. PMID:18485422
Neuroanatomical substrates involved in unrelated false facial recognition.
Ronzon-Gonzalez, Eliane; Hernandez-Castillo, Carlos R; Pasaye, Erick H; Vaca-Palomares, Israel; Fernandez-Ruiz, Juan
2017-11-22
Identifying faces is a process central for social interaction and a relevant factor in eyewitness theory. False recognition is a critical mistake during an eyewitness's identification scenario because it can lead to a wrongful conviction. Previous studies have described neural areas related to false facial recognition using the standard Deese/Roediger-McDermott (DRM) paradigm, triggering related false recognition. Nonetheless, misidentification of faces without trying to elicit false memories (unrelated false recognition) in a police lineup could involve different cognitive processes, and distinct neural areas. To delve into the neural circuitry of unrelated false recognition, we evaluated the memory and response confidence of participants while watching faces photographs in an fMRI task. Functional activations of unrelated false recognition were identified by contrasting the activation on this condition vs. the activations related to recognition (hits) and correct rejections. The results identified the right precentral and cingulate gyri as areas with distinctive activations during false recognition events suggesting a conflict resulting in a dysfunction during memory retrieval. High confidence suggested that about 50% of misidentifications may be related to an unconscious process. These findings add to our understanding of the construction of facial memories and its biological basis, and the fallibility of the eyewitness testimony.
Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.
Nummenmaa, Lauri; Calvo, Manuel G
2015-04-01
Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. Robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
Two processes support visual recognition memory in rhesus monkeys.
Guderian, Sebastian; Brigham, Danielle; Mishkin, Mortimer
2011-11-29
A large body of evidence in humans suggests that recognition memory can be supported by both recollection and familiarity. Recollection-based recognition is characterized by the retrieval of contextual information about the episode in which an item was previously encountered, whereas familiarity-based recognition is characterized instead by knowledge only that the item had been encountered previously in the absence of any context. To date, it is unknown whether monkeys rely on similar mnemonic processes to perform recognition memory tasks. Here, we present evidence from the analysis of receiver operating characteristics, suggesting that visual recognition memory in rhesus monkeys also can be supported by two separate processes and that these processes have features considered to be characteristic of recollection and familiarity. Thus, the present study provides converging evidence across species for a dual process model of recognition memory and opens up the possibility of studying the neural mechanisms of recognition memory in nonhuman primates on tasks that are highly similar to the ones used in humans.
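The receiver operating characteristics analyzed in this study are built by cumulating hit and false-alarm rates across confidence levels, from the most confident "old" responses downward. A minimal sketch with illustrative counts (the dual-process model fitting that tests for recollection and familiarity components is not shown):

```python
def roc_points(old_counts, new_counts):
    """Build ROC points from a confidence-rating recognition test.

    old_counts[i] and new_counts[i] are response counts for studied and
    unstudied items at confidence level i, highest-confidence "old"
    first.  Each point is a cumulative (false-alarm rate, hit rate)
    pair; the curve's shape (e.g. its asymmetry in probability space)
    is what dual-process analyses examine."""
    n_old, n_new = sum(old_counts), sum(new_counts)
    points, hits, fas = [], 0, 0
    for o, n in zip(old_counts, new_counts):
        hits += o
        fas += n
        points.append((fas / n_new, hits / n_old))
    return points

# 6-point confidence scale, "sure old" first (illustrative counts)
print(roc_points([40, 20, 10, 10, 10, 10], [5, 10, 15, 20, 25, 25]))
```

The final point is always (1.0, 1.0), since at the most lenient criterion every item is called "old".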
Pitsikas, Nikolaos; Sakellaridis, Nikolaos
2007-10-01
The effects of the non-competitive N-methyl-D-aspartate (NMDA) receptor antagonist memantine on recognition memory were investigated in the rat by using the object recognition task. In addition, a possible interaction between memantine and the nitric oxide (NO) donor molsidomine in antagonizing extinction of recognition memory was also evaluated utilizing the same behavioral procedure. In a first dose-response study, post-training administration of memantine (10 and 20, but not 3 mg/kg) antagonized recognition memory deficits in the rat, suggesting that memantine modulates storage and/or retrieval of information. In a subsequent study, combination of sub-threshold doses of memantine (3 mg/kg) and the NO donor molsidomine (1 mg/kg) counteracted delay-dependent impairments in the same task. Neither memantine (3 mg/kg) nor molsidomine (1 mg/kg) alone reduced object recognition performance deficits. The present findings indicate (a) that memantine is involved in recognition memory and (b) that memantine and molsidomine interact functionally in recognition memory mechanisms.
Early Visual Word Processing Is Flexible: Evidence from Spatiotemporal Brain Dynamics.
Chen, Yuanyuan; Davis, Matthew H; Pulvermüller, Friedemann; Hauk, Olaf
2015-09-01
Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions-in the vicinity of the putative visual word form area-around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.
Hargreaves, Ian S; Pexman, Penny M
2014-05-01
According to several current frameworks, semantic processing involves an early influence of language-based information followed by later influences of object-based information (e.g., situated simulations; Santos, Chaigneau, Simmons, & Barsalou, 2011). In the present study we examined whether these predictions extend to the influence of semantic variables in visual word recognition. We investigated the time course of semantic richness effects in visual word recognition using a signal-to-respond (STR) paradigm fitted to a lexical decision (LDT) and a semantic categorization (SCT) task. We used linear mixed-effects models to examine the relative contributions of language-based (number of senses, ARC) and object-based (imageability, number of features, body-object interaction ratings) descriptions of semantic richness at four STR durations (75, 100, 200, and 400 ms). Results showed an early influence of number of senses and ARC in the SCT. In both LDT and SCT, object-based effects were the last to influence participants' decision latencies. We interpret our results within a framework in which semantic processes are available to influence word recognition as a function of their availability over time, and of their relevance to task-specific demands. Copyright © 2014 Elsevier B.V. All rights reserved.
Oba, Sandra I.; Galvin, John J.; Fu, Qian-Jie
2014-01-01
Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether post-training gains in performance were due to improved auditory perception or to generally improved attention, memory and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory were assessed in ten CI users before, during, and after training with a non-auditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Post-training gains were much smaller with the non-auditory VDS training than observed in previous auditory training studies with CI users. The results suggest that post-training gains observed in previous studies were not solely attributable to improved attention or memory, and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception. PMID:23516087
Processing of Acoustic Cues in Lexical-Tone Identification by Pediatric Cochlear-Implant Recipients
ERIC Educational Resources Information Center
Peng, Shu-Chen; Lu, Hui-Ping; Lu, Nelson; Lin, Yung-Song; Deroche, Mickael L. D.; Chatterjee, Monita
2017-01-01
Purpose: The objective was to investigate acoustic cue processing in lexical-tone recognition by pediatric cochlear-implant (CI) recipients who are native Mandarin speakers. Method: Lexical-tone recognition was assessed in pediatric CI recipients and listeners with normal hearing (NH) in 2 tasks. In Task 1, participants identified naturally…
Visual Recognition Memory, Paired-Associate Learning, and Reading Achievement.
ERIC Educational Resources Information Center
Anderson, Roger H.; Samuels, S. Jay
The relationship between visual recognition memory and performance on a paired-associate task for good and poor readers was investigated. Subjects were three groups of 21, 21, and 22 children each, with mean IQ's of 98.2, 108.1, and 118.0, respectively. Three experimental tasks, individually administered to each subject, measured visual…
Functional evaluation of out-of-the-box text-mining tools for data-mining tasks
Jung, Kenneth; LePendu, Paea; Iyer, Srinivasan; Bauer-Mehren, Anna; Percha, Bethany; Shah, Nigam H
2015-01-01
Objective The trade-off between the speed and simplicity of dictionary-based term recognition and the richer linguistic information provided by more advanced natural language processing (NLP) is an area of active discussion in clinical informatics. In this paper, we quantify this trade-off among text processing systems that make different trade-offs between speed and linguistic understanding. We tested both types of systems in three clinical research tasks: phase IV safety profiling of a drug, learning adverse drug–drug interactions, and learning used-to-treat relationships between drugs and indications. Materials We first benchmarked the accuracy of the NCBO Annotator and REVEAL in a manually annotated, publicly available dataset from the 2008 i2b2 Obesity Challenge. We then applied the NCBO Annotator and REVEAL to 9 million clinical notes from the Stanford Translational Research Integrated Database Environment (STRIDE) and used the resulting data for three research tasks. Results There is no significant difference between using the NCBO Annotator and REVEAL in the results of the three research tasks when using large datasets. In one subtask, REVEAL achieved higher sensitivity with smaller datasets. Conclusions For a variety of tasks, employing simple term recognition methods instead of advanced NLP methods results in little or no impact on accuracy when using large datasets. Simpler dictionary-based methods have the advantage of scaling well to very large datasets. Promoting the use of simple, dictionary-based methods for population level analyses can advance adoption of NLP in practice. PMID:25336595
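The speed and simplicity of dictionary-based term recognition come from it being little more than string matching over a fixed term list. As a hedged illustration, the sketch below uses an invented term list, note text, and greedy longest-match strategy; it is not the NCBO Annotator's or REVEAL's actual algorithm:

```python
# Minimal sketch of dictionary-based term recognition. The dictionary,
# the note text, and the greedy longest-phrase-first strategy are all
# illustrative assumptions, not the behavior of any specific tool.
def recognize_terms(text, dictionary):
    """Return (term, token_index) pairs for greedy longest matches."""
    tokens = text.lower().split()
    hits = []
    i = 0
    while i < len(tokens):
        # Try the longest multi-word phrase first (up to 3 tokens here).
        for n in (3, 2, 1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in dictionary:
                hits.append((phrase, i))
                i += n
                break
        else:
            i += 1  # no term starts at this token
    return hits

terms = {"type 2 diabetes", "metformin", "hypertension"}
note = "Patient with type 2 diabetes and hypertension started on metformin"
print(recognize_terms(note, terms))
# → [('type 2 diabetes', 2), ('hypertension', 6), ('metformin', 9)]
```

Because the whole pipeline is a dictionary lookup per token window, it scales linearly with corpus size, which is the scaling advantage the abstract attributes to simple methods on very large note collections.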
Zhang, Rui-san; Xu, Hong-jiao; Jiang, Jin-hong; Han, Ren-wen; Chang, Min; Peng, Ya-li; Wang, Yuan; Wang, Rui
2015-12-10
A growing body of evidence suggests that the agglomeration of amyloid-β (Aβ) may be a trigger for Alzheimer's disease (AD). Central infusion of Aβ42 can lead to memory impairment in mice. Inhibiting the aggregation of Aβ has been considered a therapeutic strategy for AD. Endomorphin-1 (EM-1), an endogenous agonist of μ-opioid receptors, has been shown to inhibit the aggregation of Aβ in vitro. In the present study, we investigated whether EM-1 could alleviate the memory-impairing effects of Aβ42 in mice using novel object recognition (NOR) and object location recognition (OLR) tasks. We showed that co-administration of EM-1 into the lateral ventricle or the hippocampus was able to ameliorate Aβ42-induced amnesia, and these effects could not be inhibited by naloxone, an antagonist of μ-opioid receptors. Infusion of EM-1 or naloxone separately into the lateral ventricle had no influence on memory in the tasks. These results suggested that EM-1 might be effective as a preventative treatment for AD by inhibiting Aβ aggregation directly as a molecular modifier. Copyright © 2015 Elsevier B.V. All rights reserved.
The short- and long-term consequences of directed forgetting in a working memory task.
Festini, Sara B; Reuter-Lorenz, Patricia A
2013-01-01
Directed forgetting requires the voluntary control of memory. Whereas many studies have examined directed forgetting in long-term memory (LTM), the mechanisms and effects of directed forgetting within working memory (WM) are less well understood. The current study tests how directed forgetting instructions delivered in a WM task influence veridical memory, as well as false memory, over the short and long term. In a modified item recognition task, Experiment 1 tested WM only and demonstrated that directed forgetting reduces false recognition errors and semantic interference. Experiment 2 replicated these WM effects and used a surprise LTM recognition test to assess the long-term effects of directed forgetting in WM. Long-term veridical memory for to-be-remembered lists was better than memory for to-be-forgotten lists (the directed forgetting effect). Moreover, fewer false memories emerged for to-be-forgotten information than for to-be-remembered information in LTM as well. These results indicate that directed forgetting during WM reduces semantic processing of to-be-forgotten lists over the short and long term. Implications for theories of false memory and the mechanisms of directed forgetting within working memory are discussed.
Cultural differences in visual object recognition in 3-year-old children
Kuwabara, Megumi; Smith, Linda B.
2016-01-01
Recent research indicates that culture penetrates fundamental processes of perception and cognition (e.g. Nisbett & Miyamoto, 2005). Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (n=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects in which only 3 diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S., but not Japanese, children when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children’s recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing, findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. PMID:26985576
Local Navon letter processing affects skilled behavior: a golf-putting experiment.
Lewis, Michael B; Dawkins, Gemma
2015-04-01
Expert or skilled behaviors (for example, face recognition or sporting performance) are typically performed automatically and with little conscious awareness. Previous studies, in various domains of performance, have shown that activities immediately prior to a task demanding a learned skill can affect performance. In sport, describing the to-be-performed action is detrimental, whereas in face recognition, describing a face or reading local Navon letters is detrimental. Two golf-putting experiments are presented that compare the effects that these three tasks have on experienced and novice golfers. Experiment 1 found a Navon effect on golf performance for experienced players. Experiment 2 found, for experienced players only, that performance was impaired following the three tasks described above, when compared with reading or global Navon tasks. It is suggested that the three tasks affect skilled performance by provoking a shift from automatic behavior to a more analytic style. By demonstrating similarities between effects in face recognition and sporting behavior, it is hoped to better understand concepts in both fields.
Recognition and classification of colon cells applying the ensemble of classifiers.
Kruk, M; Osowski, S; Koktysz, R
2009-02-01
The paper presents the application of an ensemble of classifiers for the recognition of colon cells on the basis of microscope colon images. The task solved includes: segmentation of the individual cells from the image using morphological operations, the preprocessing stages leading to the extraction of features, selection of the most important features, and the classification stage applying classifiers arranged in the form of an ensemble. The paper presents and discusses the results concerning the recognition of the four most important colon cell types: eosinophilic granulocyte, neutrophilic granulocyte, lymphocyte, and plasmocyte. The proposed system is able to recognize the cells with an accuracy comparable to that of a human expert (around 5% discrepancy between the two sets of results).
Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Kulikov, M A; Mikhaĭlova, E S
2013-01-01
In 38 healthy subjects, accuracy and response time were examined during recognition of two categories of images, animals and nonliving objects, under forward masking. We obtained new data showing that masking effects depended on the categorical similarity of the target and masking stimuli. Recognition accuracy was lowest and response times were slowest when the target and masking stimuli belonged to the same category, which was also accompanied by a high dispersion of response times. The effects were clearer in the animal recognition task than in the recognition of nonliving objects. We suggest that these effects arise from interference between the cortical representations of the target and masking stimuli, and we discuss our results in the context of cortical interference and negative priming.
Face Encoding and Recognition in the Human Brain
NASA Astrophysics Data System (ADS)
Haxby, James V.; Ungerleider, Leslie G.; Horwitz, Barry; Maisog, Jose Ma.; Rapoport, Stanley I.; Grady, Cheryl L.
1996-01-01
A dissociation between human neural systems that participate in the encoding and later recognition of new memories for faces was demonstrated by measuring memory task-related changes in regional cerebral blood flow with positron emission tomography. There was almost no overlap between the brain structures associated with these memory functions. A region in the right hippocampus and adjacent cortex was activated during memory encoding but not during recognition. The most striking finding in neocortex was the lateralization of prefrontal participation. Encoding activated left prefrontal cortex, whereas recognition activated right prefrontal cortex. These results indicate that the hippocampus and adjacent cortex participate in memory function primarily at the time of new memory encoding. Moreover, face recognition is not mediated simply by recapitulation of operations performed at the time of encoding but, rather, involves anatomically dissociable operations.
The “parts and wholes” of face recognition: a review of the literature
Tanaka, James W.; Simonyi, Diana
2016-01-01
It has been claimed that faces are recognized as a “whole” rather than by the recognition of individual parts. In a paper published in the Quarterly Journal of Experimental Psychology in 1993, Martha Farah and I attempted to operationalize the holistic claim using the part/whole task. In this task, participants studied a face and then their memory for a face part was tested both in isolation and in the context of the whole face. Consistent with the holistic view, recognition of the part was superior when tested in the whole-face condition compared to when it was tested in isolation. The “whole face” or holistic advantage was not found for faces that were inverted or scrambled, nor for non-face objects, suggesting that holistic encoding is specific to normal, intact faces. In this paper, we reflect on the part/whole paradigm and how it has contributed to our understanding of what it means to recognize a face as a “whole” stimulus. We describe the value of the part/whole task for developing theories of holistic and non-holistic recognition of faces and objects. We discuss the research that has probed the neural substrates of holistic processing in healthy adults and in people with prosopagnosia and autism. Finally, we examine how experience shapes holistic face recognition in children and recognition of own- and other-race faces in adults. The goal of this article is to summarize the research on the part/whole task and speculate on how it has informed our understanding of holistic face processing. PMID:26886495
Geometry and Gesture-Based Features from Saccadic Eye-Movement as a Biometric in Radiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Tracy; Tourassi, Georgia; Yoon, Hong-Jun
In this study, we present a novel application of sketch gesture recognition to eye movements for biometric identification and estimation of task expertise. The study was performed for the task of mammographic screening with simultaneous viewing of four coordinated breast views, as typically done in clinical practice. Eye-tracking data and diagnostic decisions collected for 100 mammographic cases (25 normal, 25 benign, 50 malignant) and 10 readers (three board-certified radiologists and seven radiology residents) formed the corpus for this study. Sketch gesture recognition techniques were employed to extract geometric and gesture-based features from saccadic eye movements. Our results show that saccadic eye movement, characterized using sketch-based features, results in more accurate models for predicting individual identity and level of expertise than more traditional eye-tracking features.
Contribution of hearing aids to music perception by cochlear implant users.
Peterson, Nathaniel; Bergeson, Tonya R
2015-09-01
Modern cochlear implant (CI) encoding strategies represent the temporal envelope of sounds well but provide limited spectral information. This deficit in spectral information has been implicated as a contributing factor to difficulty with speech perception in noisy conditions, discrimination between talkers, and melody recognition. One way to supplement spectral information for CI users is to fit a hearing aid (HA) to the non-implanted ear. In this study, 14 postlingually deaf adults (half with a unilateral CI and half with a CI and an HA (CI + HA)) were tested on measures of music perception and familiar melody recognition. CI + HA listeners performed significantly better than CI-only listeners on all pitch-based music perception tasks. The CI + HA group did not perform significantly better than the CI-only group in the two tasks that relied on duration cues. Recognition of familiar melodies was significantly enhanced for the group wearing an HA in addition to their CI. This advantage in melody recognition increased when melodic sequences were presented with the addition of harmony. These results show that, for CI recipients with aidable hearing in the non-implanted ear, using an HA in addition to their implant improves perception of musical pitch and recognition of real-world melodies.
Speed and accuracy of dyslexic versus typical word recognition: an eye-movement investigation
Kunert, Richard; Scheepers, Christoph
2014-01-01
Developmental dyslexia is often characterized by a dual deficit in both word recognition accuracy and general processing speed. While previous research into dyslexic word recognition may have suffered from speed-accuracy trade-off, the present study employed a novel eye-tracking task that is less prone to such confounds. Participants (10 dyslexics and 12 controls) were asked to look at real word stimuli, and to ignore simultaneously presented non-word stimuli, while their eye-movements were recorded. Improvements in word recognition accuracy over time were modeled in terms of a continuous non-linear function. The words' rhyme consistency and the non-words' lexicality (unpronounceable, pronounceable, pseudohomophone) were manipulated within-subjects. Speed-related measures derived from the model fits confirmed generally slower processing in dyslexics, and showed a rhyme consistency effect in both dyslexics and controls. In terms of overall error rate, dyslexics (but not controls) performed less accurately on rhyme-inconsistent words, suggesting a representational deficit for such words in dyslexics. Interestingly, neither group showed a pseudohomophone effect in speed or accuracy, which might call the task-independent pervasiveness of this effect into question. The present results illustrate the importance of distinguishing between speed- vs. accuracy-related effects for our understanding of dyslexic word recognition. PMID:25346708
Component-based target recognition inspired by human vision
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Agyepong, Kwabena
2009-05-01
In contrast with machine vision, humans can recognize an object against a complex background with great flexibility. For example, given the task of finding and circling all cars (with no further information) in a picture, you may build a virtual image in mind from the task (or target) description before looking at the picture. Specifically, the virtual car image may be composed of key components such as the driver cabin and wheels. In this paper, we propose a component-based target recognition method that simulates the human recognition process. The component templates (equivalent to the virtual image in mind) of the target (car) are manually decomposed from the target feature image. Meanwhile, the edges of the testing image are extracted using a difference of Gaussian (DOG) model that simulates the spatiotemporal response of the visual process. A phase correlation matching algorithm is then applied to match the templates with the testing edge image. If all key component templates are matched with the examined object, that object is recognized as the target. Besides recognition accuracy, we also investigate whether the method works with partial targets (half cars). In our experiments, several natural pictures taken on streets were used to test the proposed method. The preliminary results show that the component-based recognition method is very promising.
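Phase correlation, the matching step named in the abstract above, can be sketched in a few lines of NumPy. This is a generic illustration under textbook assumptions (same-sized arrays, cyclic shift), not the authors' implementation:

```python
import numpy as np

def phase_correlation(template, image):
    """Locate `template` inside same-sized `image` by the peak of the
    normalized cross-power spectrum; returns (row, col) shift and peak value."""
    F_img = np.fft.fft2(image)
    F_tpl = np.fft.fft2(template)
    cross = F_img * np.conj(F_tpl)
    cross /= np.abs(cross) + 1e-12   # keep only the phase information
    corr = np.fft.ifft2(cross).real  # ideally a delta at the displacement
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak, corr.max()

# Illustrative usage: recover a known cyclic shift of a random pattern.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
shifted = np.roll(base, (3, 4), axis=(0, 1))
peak, score = phase_correlation(base, shifted)
print(peak)  # → (3, 4)
```

Because the cross-power spectrum is normalized to unit magnitude, the inverse transform concentrates into a sharp peak at the template's displacement, which is what makes the method a natural fit for matching component templates against an edge image.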
Cross-domain expression recognition based on sparse coding and transfer learning
NASA Astrophysics Data System (ADS)
Yang, Yong; Zhang, Weiyi; Huang, Yong
2017-05-01
Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, this condition is hardly satisfied because of differences in lighting, shading, race, and so on. In order to solve this problem and improve the performance of expression recognition in practical applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, that is, a dictionary, is learned. Then, based on the idea of transfer learning, the learned primitive patterns are transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. The experimental results on the CK+, JAFFE, and NVIE databases show that the transfer learning method based on sparse coding can effectively improve the expression recognition rate in the cross-domain expression recognition task and is suitable for practical facial expression recognition applications.
Cross-modal working memory binding and word recognition skills: how specific is the link?
Wang, Shinmin; Allen, Richard J
2018-04-01
Recent research has suggested that the creation of temporary bound representations of information from different sources within working memory uniquely relates to word recognition abilities in school-age children. However, it is unclear to what extent this link is attributable specifically to the binding ability for cross-modal information. This study examined the performance of Grade 3 (8-9 years old) children on binding tasks requiring either temporary association formation of two visual items (i.e., within-modal binding) or pairs of visually presented abstract shapes and auditorily presented nonwords (i.e., cross-modal binding). Children's word recognition skills were related to performance on the cross-modal binding task but not on the within-modal binding task. Further regression models showed that cross-modal binding memory was a significant predictor of word recognition when memory for its constituent elements, general abilities, and crucially, within-modal binding memory were taken into account. These findings may suggest a specific link between the ability to bind information across modalities within working memory and word recognition skills.
False recall and recognition of brand names increases over time.
Sherman, Susan M
2013-01-01
Using the Deese-Roediger-McDermott (DRM) paradigm, participants are presented with lists of associated words (e.g., bed, awake, night). Subsequently, they reliably have false memories for related but nonpresented words (e.g., SLEEP). Previous research has found that false memories can be created for brand names (e.g., Morrisons, Sainsbury's, Waitrose, and TESCO). The present study investigates the effect of a week's delay on false memories for brand names. Participants were presented with lists of brand names followed by a distractor task. In two between-subjects experiments, participants completed a free recall task or a recognition task either immediately or a week later. In two within-subjects experiments, participants completed a free recall task or a recognition task both immediately and a week later. Correct recall for presented list items decreased over time, whereas false recall for nonpresented lure items increased. For recognition, raw scores revealed an increase in false memory across time reflected in an increase in Remember responses. Analysis of Pr scores revealed that false memory for lures stayed constant over a week, but with an increase in Remember responses in the between-subjects experiment and a trend in the same direction in the within-subjects experiment. Implications for theories of false memory are discussed.
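The Pr scores analyzed in the abstract above refer to the two-high-threshold discrimination index, conventionally computed as hit rate minus false-alarm rate. A minimal sketch with illustrative numbers (invented for this example, not the study's data):

```python
# Hedged sketch of the Pr discrimination index (Snodgrass & Corwin style):
# Pr = p("old" | old item) - p("old" | new item). Counts are illustrative.
def pr_score(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate - fa_rate

# Veridical memory: "old" responses to studied brand names vs. unrelated new names.
veridical = pr_score(hits=40, misses=10, false_alarms=5, correct_rejections=45)
# False memory: "old" responses to critical lures vs. unrelated new names.
false_mem = pr_score(hits=30, misses=20, false_alarms=5, correct_rejections=45)
print(round(veridical, 2), round(false_mem, 2))  # → 0.7 0.5
```

Separating raw "old" response rates from Pr in this way is exactly why the abstract can report raw false recognition rising over a week while Pr for lures stays constant: a general increase in "old" responding inflates both hits to lures and false alarms to unrelated items, leaving their difference unchanged.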
Explicit and spontaneous retrieval of emotional scenes: electrophysiological correlates.
Weymar, Mathias; Bradley, Margaret M; El-Hinnawi, Nasryn; Lang, Peter J
2013-10-01
When event-related potentials (ERP) are measured during a recognition task, items that have previously been presented typically elicit a larger late (400-800 ms) positive potential than new items. Recent data, however, suggest that emotional, but not neutral, pictures show ERP evidence of spontaneous retrieval when presented in a free-viewing task (Ferrari, Bradley, Codispoti, Karlsson, & Lang, 2012). In two experiments, we further investigated the brain dynamics of implicit and explicit retrieval. In Experiment 1, brain potentials were measured during a semantic categorization task, which did not explicitly probe episodic memory, but which, like a recognition task, required an active decision and a button press, and were compared to those elicited during recognition and free viewing. Explicit recognition prompted a late enhanced positivity for previously presented, compared with new, pictures regardless of hedonic content. In contrast, only emotional pictures showed an old-new difference when the task did not explicitly probe episodic memory, either when making an active categorization decision regarding picture content, or when simply viewing pictures. In Experiment 2, however, neutral pictures did prompt a significant old-new ERP difference during subsequent free viewing when emotionally arousing pictures were not included in the encoding set. These data suggest that spontaneous retrieval is heightened for salient cues, perhaps reflecting heightened attention and elaborative processing at encoding.
Prefrontal Engagement during Source Memory Retrieval Depends on the Prior Encoding Task
Kuo, Trudy Y.; Van Petten, Cyma
2008-01-01
The prefrontal cortex is strongly engaged by some, but not all, episodic memory tests. Prior work has shown that source recognition tests—those that require memory for conjunctions of studied attributes—yield deficient performance in patients with prefrontal damage and greater prefrontal activity in healthy subjects, as compared to simple recognition tests. Here, we tested the hypothesis that there is no intrinsic relationship between the prefrontal cortex and source memory, but that the prefrontal cortex is engaged by the demand to retrieve weakly encoded relationships. Subjects attempted to remember object/color conjunctions after an encoding task that focused on object identity alone, and an integrative encoding task that encouraged attention to object/color relationships. After the integrative encoding task, the late prefrontal brain electrical activity that typically occurs in source memory tests was eliminated. Earlier brain electrical activity related to successful recognition of the objects was unaffected by the nature of prior encoding. PMID:16839287
Test-retest reliability and task order effects of emotional cognitive tests in healthy subjects.
Adams, Thomas; Pounder, Zoe; Preston, Sally; Hanson, Andy; Gallagher, Peter; Harmer, Catherine J; McAllister-Williams, R Hamish
2016-11-01
Little is known of the retest reliability of emotional cognitive tasks or the impact of using different tasks employing similar emotional stimuli within a battery. We investigated this in healthy subjects. We found improved overall performance in an emotional attentional blink task (EABT) with repeat testing at one hour and one week compared to baseline, but the impact of an emotional stimulus on performance was unchanged. Similarly, performance on a facial expression recognition task (FERT) was better one week after a baseline test, though the relative effect of specific emotions was unaltered. There was no effect of repeat testing on an emotional word categorising, recall and recognition task. We found no difference in performance in the FERT and EABT irrespective of task order. We concluded that it is possible to use emotional cognitive tasks in longitudinal studies and combine tasks using emotional facial stimuli in a single battery.
Recognition of emotion with temporal lobe epilepsy and asymmetrical amygdala damage.
Fowler, Helen L; Baker, Gus A; Tipples, Jason; Hare, Dougal J; Keller, Simon; Chadwick, David W; Young, Andrew W
2006-08-01
Impairments in emotion recognition occur when there is bilateral damage to the amygdala. In this study, ability to recognize auditory and visual expressions of emotion was investigated in people with asymmetrical amygdala damage (AAD) and temporal lobe epilepsy (TLE). Recognition of five emotions was tested across three participant groups: those with right AAD and TLE, those with left AAD and TLE, and a comparison group. Four tasks were administered: recognition of emotion from facial expressions, sentences describing emotion-laden situations, nonverbal sounds, and prosody. Accuracy scores for each task and emotion were analysed, and no consistent overall effect of AAD on emotion recognition was found. However, some individual participants with AAD were significantly impaired at recognizing emotions, in both auditory and visual domains. The findings indicate that a minority of individuals with AAD have impairments in emotion recognition, but no evidence of specific impairments (e.g., visual or auditory) was found.
Memory Asymmetry of Forward and Backward Associations in Recognition Tasks
ERIC Educational Resources Information Center
Yang, Jiongjiong; Zhao, Peng; Zhu, Zijian; Mecklinger, Axel; Fang, Zhiyong; Li, Han
2013-01-01
There is an intensive debate on whether memory for serial order is symmetric. The objective of this study was to explore whether associative asymmetry is modulated by memory task (recognition vs. cued recall). Participants were asked to memorize word triples (Experiments 1-2) or pairs (Experiments 3-6) during the study phase. They then recalled…
ERIC Educational Resources Information Center
Boot, Inge; Pecher, Diane
2008-01-01
Many models of word recognition predict that neighbours of target words will be activated during word processing. Cascaded models can make the additional prediction that semantic features of those neighbours get activated before the target has been uniquely identified. In two semantic decision tasks neighbours that were congruent (i.e., from the…
ERIC Educational Resources Information Center
Hsiao, Janet H.; Lam, Sze Man
2013-01-01
Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face…
Ambiguity and Relatedness Effects in Semantic Tasks: Are They Due to Semantic Coding?
ERIC Educational Resources Information Center
Hino, Yasushi; Pexman, Penny M.; Lupker, Stephen J.
2006-01-01
According to parallel distributed processing (PDP) models of visual word recognition, the speed of semantic coding is modulated by the nature of the orthographic-to-semantic mappings. Consistent with this idea, an ambiguity disadvantage and a relatedness-of-meaning (ROM) advantage have been reported in some word recognition tasks in which semantic…
Enrici, Ivan; Adenzato, Mauro; Ardito, Rita B.; Mitkova, Antonia; Cavallo, Marco; Zibetti, Maurizio; Lopiano, Leonardo; Castelli, Lorys
2015-01-01
Background Parkinson’s disease (PD) is characterised by well-known motor symptoms, whereas the presence of cognitive non-motor symptoms, such as emotional disturbances, is still underestimated. One of the major problems in studying emotion deficits in PD is an atomising approach that does not take into account different levels of emotion elaboration. Our study addressed the question of whether people with PD exhibit difficulties in one or more specific dimensions of emotion processing, investigating three different levels of analysis, that is, recognition, representation, and regulation. Methodology Thirty-two consecutive medicated patients with PD and 25 healthy controls were enrolled in the study. Participants completed a three-level assessment of emotional processing using quantitative standardised emotional tasks: the Ekman 60-Faces for emotion recognition, the full 36-item version of the Reading the Mind in the Eyes (RME) for emotion representation, and the 20-item Toronto Alexithymia Scale (TAS-20) for emotion regulation. Principal Findings Regarding emotion recognition, patients obtained significantly worse scores than controls on the Ekman 60-Faces total score but not on any specific basic emotion. For emotion representation, patients obtained significantly worse scores than controls on the RME experimental score but not on the RME gender control task. Finally, on emotion regulation, PD patients and controls did not perform differently on the TAS-20, and no specific differences were found on the TAS-20 subscales. The PD impairments in emotion recognition and representation did not correlate with dopamine therapy, disease severity, or duration of illness. These results are independent of other cognitive processes, such as global cognitive status and executive function, and of psychiatric status, such as depression, anxiety, or apathy.
Conclusions These results may contribute to better understanding of the emotional problems that are often seen in patients with PD and the measures used to test these problems, in particular on the use of different versions of the RME task. PMID:26110271
Dewhurst, Stephen A; Knott, Lauren M
2010-12-01
Five experiments investigated the encoding-retrieval match in recognition memory by manipulating read and generate conditions at study and at test. Experiments 1A and 1B confirmed previous findings that reinstating encoding operations at test enhances recognition accuracy in a within-groups design but reduces recognition accuracy in a between-groups design. Experiment 2A showed that generating from anagrams at study and at test enhanced recognition accuracy even when study and test items were generated from different anagrams. Experiment 2B showed that switching from one generation task at study (e.g., anagram solution) to a different generation task at test (e.g., fragment completion) eliminated this recognition advantage. Experiment 3 showed that the recognition advantage found in Experiment 1A is reliably present up to 1 week after study. The findings are consistent with theories of memory that emphasize the importance of the match between encoding and retrieval operations.
Jurado-Berbel, Patricia; Costa-Miserachs, David; Torras-Garcia, Meritxell; Coll-Andreu, Margalida; Portell-Cortés, Isabel
2010-02-11
The present work examined whether post-training systemic epinephrine (EPI) is able to modulate short-term (3 h) and long-term (24 h and 48 h) memory of standard object recognition, as well as long-term (24 h) memory of the separate "what" (object identity) and "where" (object location) components of object recognition. Although object recognition training is associated with low arousal levels, all the animals received habituation to the training box in order to further reduce emotional arousal. Post-training EPI improved long-term (24 h and 48 h), but not short-term (3 h), memory in the standard object recognition task, as well as 24 h memory for both object identity and object location. These data indicate that post-training epinephrine: (1) facilitates long-term memory for standard object recognition; (2) exerts separate facilitatory effects on the "what" (object identity) and "where" (object location) components of object recognition; and (3) is capable of improving memory for a low-arousing task even in highly habituated rats.
Pope, Sarah M; Russell, Jamie L; Hopkins, William D
2015-01-01
Imitation recognition provides a viable platform from which advanced social cognitive skills may develop. Despite evidence that non-human primates are capable of imitation recognition, how this ability is related to social cognitive skills is unknown. In this study, we compared imitation recognition performance, as indicated by the production of testing behaviors, with performance on a series of tasks that assess social and physical cognition in 49 chimpanzees. In the initial analyses, we found that males were more responsive than females to being imitated and engaged in significantly greater behavior repetitions and testing sequences. We also found that subjects who consistently recognized being imitated performed better on social but not physical cognitive tasks, as measured by the Primate Cognitive Test Battery. These findings suggest that the neural constructs underlying imitation recognition are likely associated with or among those underlying more general socio-communicative abilities in chimpanzees. Implications regarding how imitation recognition may facilitate other social cognitive processes, such as mirror self-recognition, are discussed. PMID:25767454
Fast neuromimetic object recognition using FPGA outperforms GPU implementations.
Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph
2013-08-01
Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
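The HMAX model mentioned above alternates template matching (S units, Gabor-like filters) with max pooling (C units). A minimal numpy sketch of the first two stages follows; the filter parameters and image size are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gabor(size, theta, lam=4.0, sigma=2.0, gamma=0.5):
    """One S1-style Gabor filter (parameters are illustrative, not the paper's)."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    x = xx * np.cos(theta) + yy * np.sin(theta)
    y = -xx * np.sin(theta) + yy * np.cos(theta)
    g = np.exp(-(x**2 + gamma**2 * y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * x / lam)
    return g - g.mean()  # zero-mean so responses ignore local brightness

def c1_maxpool(s1, pool=4):
    """C1 stage: local max pooling over an S1 response map."""
    h, w = s1.shape
    h, w = h - h % pool, w - w % pool
    blocks = s1[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return blocks.max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
win = np.lib.stride_tricks.sliding_window_view(img, (11, 11))
s1 = np.abs(np.einsum('ijkl,kl->ij', win, gabor(11, theta=0.0)))  # S1: |Gabor response|
c1 = c1_maxpool(s1)  # coarser map, tolerant to small shifts
```

The max pooling in `c1_maxpool` is what makes the C1 representation tolerant to small translations; an FPGA implementation parallelizes exactly these filter-and-pool loops.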
Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.
Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina
2015-07-01
It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as the active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Second, they show that the visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition.
These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bidirectional Modulation of Recognition Memory
Ho, Jonathan W.; Poeta, Devon L.; Jacobson, Tara K.; Zolnik, Timothy A.; Neske, Garrett T.; Connors, Barry W.
2015-01-01
Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects. For example, animals and humans with perirhinal damage are unable to distinguish familiar from novel objects in recognition memory tasks. In the normal brain, perirhinal neurons respond to novelty and familiarity by increasing or decreasing firing rates. Recent work also implicates oscillatory activity in the low-beta and low-gamma frequency bands in sensory detection, perception, and recognition. Using optogenetic methods in a spontaneous object exploration (SOR) task, we altered recognition memory performance in rats. In the SOR task, normal rats preferentially explore novel images over familiar ones. We modulated exploratory behavior in this task by optically stimulating channelrhodopsin-expressing perirhinal neurons at various frequencies while rats looked at novel or familiar 2D images. Stimulation at 30–40 Hz during looking caused rats to treat a familiar image as if it were novel by increasing time looking at the image. Stimulation at 30–40 Hz was not effective in increasing exploration of novel images. Stimulation at 10–15 Hz caused animals to treat a novel image as familiar by decreasing time looking at the image, but did not affect looking times for images that were already familiar. We conclude that optical stimulation of PER at different frequencies can alter visual recognition memory bidirectionally. SIGNIFICANCE STATEMENT Recognition of novelty and familiarity are important for learning, memory, and decision making. Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects, but how novelty and familiarity are encoded and transmitted in the brain is not known. Perirhinal neurons respond to novelty and familiarity by changing firing rates, but recent work suggests that brain oscillations may also be important for recognition. 
In this study, we showed that stimulation of the PER could increase or decrease exploration of novel and familiar images depending on the frequency of stimulation. Our findings suggest that optical stimulation of PER at specific frequencies can predictably alter recognition memory. PMID:26424881
Moore, Kimberly Sena; Peterson, David A; O'Shea, Geoffrey; McIntosh, Gerald C; Thaut, Michael H
2008-01-01
Research shows that people with multiple sclerosis exhibit learning and memory difficulties and that music can be used successfully as a mnemonic device to aid in learning and memory. However, there is currently no research investigating the effectiveness of music mnemonics as a compensatory learning strategy for people with multiple sclerosis. Participants with clinically definitive multiple sclerosis (N = 38) were given a verbal learning and memory test. Results from a recognition memory task were analyzed that compared learning through music (n = 20) versus learning through speech (n = 18). Preliminary baseline neuropsychological data were collected that measured executive functioning skills, learning and memory abilities, sustained attention, and level of disability. An independent samples t test showed no significant difference between groups on baseline neuropsychological functioning or on recognition task measures. Correlation analyses suggest that music mnemonics may facilitate learning for people who are less impaired by the disease. Implications for future research are discussed.
The association between PTSD and facial affect recognition.
Williams, Christian L; Milanak, Melissa E; Judah, Matt R; Berenbaum, Howard
2018-05-05
The major aims of this study were to examine how, if at all, having higher levels of PTSD would be associated with performance on a facial affect recognition task in which facial expressions of emotion are superimposed on emotionally valenced, non-face images. College students with trauma histories (N = 90) completed a facial affect recognition task as well as measures of exposure to traumatic events, and PTSD symptoms. When the face and context matched, participants with higher levels of PTSD were significantly more accurate. When the face and context were mismatched, participants with lower levels of PTSD were more accurate than were those with higher levels of PTSD. These findings suggest that PTSD is associated with how people process affective information. Furthermore, these results suggest that the enhanced attention of people with higher levels of PTSD to affective information can be either beneficial or detrimental to their ability to accurately identify facial expressions of emotion. Limitations, future directions and clinical implications are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
Classification of time-series images using deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Hatami, Nima; Gavet, Yann; Debayle, Johan
2018-04-01
Convolutional Neural Networks (CNNs) have achieved great success in image recognition tasks by automatically learning a hierarchical feature representation from raw data. While the majority of the Time-Series Classification (TSC) literature is focused on 1D signals, this paper uses Recurrence Plots (RP) to transform time-series into 2D texture images and then takes advantage of a deep CNN classifier. Image representation of time-series introduces feature types that are not available for 1D signals, and therefore TSC can be treated as a texture image recognition task. The CNN model also allows learning different levels of representation together with a classifier, jointly and automatically. Therefore, using RP and CNN in a unified framework is expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to the existing deep architectures but also to the state-of-the-art TSC algorithms.
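A recurrence plot, in the sense used above, thresholds the pairwise distances of a series against itself to produce a binary 2D image. A minimal sketch, where the threshold `eps` and the test signal are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def recurrence_plot(x, eps=0.1):
    """Binary recurrence plot: R[i, j] = 1 where |x[i] - x[j]| <= eps."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix
    return (d <= eps).astype(np.uint8)

x = np.sin(np.linspace(0, 4 * np.pi, 64))  # a periodic toy series
rp = recurrence_plot(x, eps=0.05)
# rp is a 64x64 texture image; periodic series yield diagonal line patterns,
# which is the kind of 2D structure a CNN classifier can exploit
```

Stacks of such images (one per series) would then be fed to an ordinary 2D CNN, turning TSC into the texture-recognition problem the abstract describes.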
Hills, Peter J; Hill, Dominic M
2017-07-12
Sad individuals perform more accurately at face identity recognition (Hills, Werno, & Lewis, 2011), possibly because they scan more of the face during encoding. During expression identification tasks, sad individuals do not fixate on the eyes as much as happier individuals (Wu, Pu, Allen, & Pauli, 2012). Fixating on features other than the eyes leads to a reduced own-ethnicity bias (Hills & Lewis, 2006). This background suggests that sad individuals would not view the eyes as much as happy individuals, and that this would result in improved expression recognition and a reduced own-ethnicity bias. This prediction was tested using an expression identification task with eye tracking. We demonstrate that sad-induced participants show enhanced expression recognition and a reduced own-ethnicity bias compared with happy-induced participants, due to scanning more facial features. We conclude that mood affects eye movements and face encoding by causing a wider sampling strategy and deeper encoding of facial features diagnostic for expression identification.
Music to my ears: Age-related decline in musical and facial emotion recognition.
Sutcliffe, Ryan; Rendell, Peter G; Henry, Julie D; Bailey, Phoebe E; Ruffman, Ted
2017-12-01
We investigated young-old differences in emotion recognition using music and face stimuli and tested explanatory hypotheses regarding older adults' typically worse emotion recognition. In Experiment 1, young and older adults labeled emotions in an established set of faces, and in classical piano stimuli that we pilot-tested on other young and older adults. Older adults were worse at detecting anger, sadness, fear, and happiness in music. Performance on the music and face emotion tasks was not correlated for either age group. Because musical expressions of fear were not equated for age groups in the pilot study of Experiment 1, we conducted a second experiment in which we created a novel set of music stimuli that included more accessible musical styles, and which we again pilot-tested on young and older adults. In this pilot study, all musical emotions were identified similarly by young and older adults. In Experiment 2, participants also made age estimations in another set of faces to examine whether potential relations between the face and music emotion tasks would be shared with the age estimation task. Older adults did worse in each of the tasks, and had specific difficulty recognizing happy, sad, peaceful, angry, and fearful music clips. Older adults' difficulties in each of the 3 tasks-music emotion, face emotion, and face age-were not correlated with each other. General cognitive decline did not appear to explain our results as increasing age predicted emotion performance even after fluid IQ was controlled for within the older adult group. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Speech and gesture interfaces for squad-level human-robot teaming
NASA Astrophysics Data System (ADS)
Harris, Jonathan; Barber, Daniel
2014-06-01
As the military increasingly adopts semi-autonomous unmanned systems for military operations, utilizing redundant and intuitive interfaces for communication between Soldiers and robots is vital to mission success. Currently, Soldiers use a common lexicon to verbally and visually communicate maneuvers between teammates. In order for robots to be seamlessly integrated within mixed-initiative teams, they must be able to understand this lexicon. Recent innovations in gaming platforms have led to advancements in speech and gesture recognition technologies, but the reliability of these technologies for enabling communication in human-robot teaming is unclear. The purpose of the present study was to investigate the performance of Commercial-Off-The-Shelf (COTS) speech and gesture recognition tools in classifying a Squad Level Vocabulary (SLV) for a spatial navigation reconnaissance and surveillance task. The SLV for this study was based on findings from a survey conducted with Soldiers at Fort Benning, GA. The items of the survey focused on the communication between the Soldier and the robot, specifically with regard to verbally instructing them to execute reconnaissance and surveillance tasks. Resulting commands, identified from the survey, were then converted to equivalent arm and hand gestures, leveraging existing visual signals (e.g. the U.S. Army Field Manual for Visual Signaling). A study was then run to test the ability of commercially available automated speech recognition technologies and a gesture recognition glove to classify these commands in a simulated intelligence, surveillance, and reconnaissance task. This paper presents classification accuracy of these devices for both speech and gesture modalities independently.
Eliyahu, Ilan; Luria, Roy; Hareuveny, Ronen; Margaliot, Menachem; Meiran, Nachshon; Shani, Gad
2006-02-01
The present study examined the effects of exposure to electromagnetic radiation emitted by a standard GSM phone at 890 MHz on human cognitive functions. This study attempted to establish a connection between the exposure of a specific area of the brain and the cognitive functions associated with that area. A total of 36 healthy right-handed male subjects performed four distinct cognitive tasks: spatial item recognition, verbal item recognition, and two spatial compatibility tasks. Tasks were chosen according to the brain side they are assumed to activate. All subjects performed the tasks under three exposure conditions: right side, left side, and sham exposure. The phones were controlled by a base station simulator and operated at their full power. We recorded the reaction times (RTs) and accuracy of the responses. The experiments consisted of two sections of 1 h each, with a 5 min break in between. The tasks and the exposure regimes were counterbalanced. The results indicated that exposure of the left side of the brain slows down the left-hand response time in the second (later) part of the experiment. This effect was apparent in three of the four tasks, but was highly significant in only one of the tests. The exposure intensity and its duration exceeded the common exposure of cellular phone users.
Interfering with memory for faces: The cost of doing two things at once.
Wammes, Jeffrey D; Fernandes, Myra A
2016-01-01
We inferred the processes critical for episodic retrieval of faces by measuring susceptibility to memory interference from different distracting tasks. Experiment 1 examined recognition of studied faces under full attention (FA) or each of two divided attention (DA) conditions requiring concurrent decisions to auditorily presented letters. Memory was disrupted in both DA relative to FA conditions, a result contrary to a material-specific account of interference effects. Experiment 2 investigated whether the magnitude of interference depended on competition between concurrent tasks for common processing resources. Studied faces were presented either upright (configurally processed) or inverted (featurally processed). Recognition was completed under FA, or DA with one of two face-based distracting tasks requiring either featural or configural processing. We found an interaction: memory for upright faces was lower under DA when the distracting task required configural than featural processing, while the reverse was true for memory of inverted faces. Across experiments, the magnitude of memory interference was similar (a 19% or 20% decline from FA) regardless of whether the materials in the distracting task overlapped with the to-be-remembered information. Importantly, interference was significantly larger (42%) when the processing demands of the distracting and target retrieval task overlapped, suggesting a processing-specific account of memory interference.
ERIC Educational Resources Information Center
Rojahn, Johannes; And Others
1995-01-01
This literature review discusses 21 studies on facial emotion recognition by persons with mental retardation in terms of methodological characteristics, stimulus material, salient variables and their relation to recognition tasks, and emotion recognition deficits in mental retardation. A table provides comparative data on all 21 studies. (DB)
Thermal-to-visible face recognition using partial least squares.
Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson
2015-03-01
Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.
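The PLS model-building stage described above regresses targets on extracted features; a one-vs-all classifier fits one such model per subject on 0/1 indicator targets and picks the largest predicted response. The sketch below uses a minimal NIPALS PLS1 in place of the authors' implementation; the function name and the synthetic data are illustrative assumptions:

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Minimal PLS1 via NIPALS: returns b with y_hat ~= (X - X.mean(0)) @ b + y.mean()."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)      # weight vector
        t = Xc @ w                  # latent score
        tt = t @ t
        p = Xc.T @ t / tt           # X loading
        q = (yc @ t) / tt           # y loading
        Xc = Xc - np.outer(t, p)    # deflate X and y before the next component
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.solve(P.T @ W, np.array(Q))

# One-vs-all use: fit one model per gallery subject on indicator targets,
# then classify a probe by the largest predicted response.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
y = X @ np.array([1.0, -2.0, 0.5])        # noiseless linear toy target
b = pls1_fit(X, y, n_components=3)        # full rank: recovers the coefficients
```

With as many components as the feature rank, PLS1 coincides with least squares, which is why the toy example recovers the true coefficients; in practice far fewer components are kept to regularize across the thermal/visible modality gap.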
Visual memory in unilateral spatial neglect: immediate recall versus delayed recognition.
Moreh, Elior; Malkinson, Tal Seidel; Zohary, Ehud; Soroker, Nachum
2014-09-01
Patients with unilateral spatial neglect (USN) often show impaired performance in spatial working memory tasks, apart from the difficulty retrieving "left-sided" spatial data from long-term memory, shown in the "piazza effect" by Bisiach and colleagues. This study's aim was to compare the effect of the spatial position of a visual object on immediate and delayed memory performance in USN patients. Specifically, immediate verbal recall performance, tested using a simultaneous presentation of four visual objects in four quadrants, was compared with memory in a later-provided recognition task, in which objects were individually shown at the screen center. Unlike healthy controls, USN patients showed a left-side disadvantage and a vertical bias in the immediate free recall task (69% vs. 42% recall for right- and left-sided objects, respectively). In the recognition task, the patients correctly recognized half of "old" items, and their correct rejection rate was 95.5%. Importantly, when the analysis focused on previously recalled items (in the immediate task), no statistically significant difference was found in the delayed recognition of objects according to their original quadrant of presentation. Furthermore, USN patients were able to recollect the correct original location of the recognized objects in 60% of the cases, well beyond chance level. This suggests that the memory trace formed in these cases was not only semantic but also contained a visuospatial tag. Finally, successful recognition of objects missed in recall trials points to formation of memory traces for neglected contralesional objects, which may become accessible to retrieval processes in explicit memory.
Inattentional blindness for ignored words: comparison of explicit and implicit memory tasks.
Butler, Beverly C; Klein, Raymond
2009-09-01
Inattentional blindness is described as the failure to perceive a supra-threshold stimulus when attention is directed away from that stimulus. Based on performance on an explicit recognition memory test and concurrent functional imaging data Rees, Russell, Frith, and Driver [Rees, G., Russell, C., Frith, C. D., & Driver, J. (1999). Inattentional blindness versus inattentional amnesia for fixated but ignored words. Science, 286, 2504-2507] reported inattentional blindness for word stimuli that were fixated but ignored. The present study examined both explicit and implicit memory for fixated but ignored words using a selective-attention task in which overlapping picture/word stimuli were presented at fixation. No explicit awareness of the unattended words was apparent on a recognition memory test. Analysis of an implicit memory task, however, indicated that unattended words were perceived at a perceptual level. Thus, the selective-attention task did not result in perfect filtering as suggested by Rees et al. While there was no evidence of conscious perception, subjects were not blind to the implicit perceptual properties of fixated but ignored words.
Froger, Charlotte; Taconnat, Laurence; Landré, Lionel; Beigneux, Katia; Isingrini, Michel
2009-04-01
A total of 16 young (M = 27.25 years), 13 healthy elderly (M = 75.38 years), and 10 older adults with probable mild cognitive impairment (MCI; M = 78.6 years) carried out a task under two different encoding conditions (shallow vs. semantic) and two retrieval conditions (free recall vs. recognition). For the shallow condition, participants had to decide whether the first or last letter of each word in a list was "E." For the semantic condition, they had to decide whether each word represented a concrete or abstract entity. The MCI group was only able to benefit from semantic encoding to the same extent as the healthy older adults in the recognition task, whereas the younger and healthy older adults benefited in both retrieval tasks. These results suggest that the MCI group required cognitive support at retrieval to make effective use of semantic processing carried out at encoding. In the discussion, we suggest that adults with MCI engage more in deep processing, using the semantic network, than hitherto thought.
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
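The models in this record are deep spatiotemporal CNNs, which are not reproduced here. As a toy illustration of the underlying idea — that pooling a representation over transformations (here, viewpoints) yields features stable enough to support recognition across those transformations — consider this minimal, hypothetical Python sketch; the feature vectors and action names are invented for illustration:

```python
# Toy sketch (not the paper's model): averaging features over transformed
# versions of a stimulus gives a representation that is approximately
# invariant to those transformations.
def pooled_representation(views):
    """Average feature vectors over viewpoints (a crude invariance operation)."""
    n = len(views)
    dims = len(views[0])
    return [sum(v[i] for v in views) / n for i in range(dims)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical feature vectors: one action seen from two viewpoints,
# and a different action seen from two viewpoints.
walk_views = [[1.0, 0.2, 0.1], [0.8, 0.4, 0.1]]
run_views = [[0.1, 0.9, 0.8], [0.2, 0.7, 1.0]]

walk_rep = pooled_representation(walk_views)
run_rep = pooled_representation(run_views)

# A previously unseen viewpoint of "walk" lands nearer the pooled walk template.
new_walk_view = [0.9, 0.3, 0.2]
print(distance(new_walk_view, walk_rep) < distance(new_walk_view, run_rep))  # True
```

The paper's claim is the analogous statement for learned CNN representations: architectures that score better on invariant action discrimination also match neural data better.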
Effects of Steady-State Noise on Verbal Working Memory in Young Adults
Alt, Mary; DeDe, Gayle; Olson, Sarah; Shehorn, James
2015-01-01
Purpose We set out to examine the impact of perceptual, linguistic, and capacity demands on performance of verbal working-memory tasks. The Ease of Language Understanding model (Rönnberg et al., 2013) provides a framework for testing the dynamics of these interactions within the auditory-cognitive system. Methods Adult native speakers of English (n = 45) participated in verbal working-memory tasks requiring processing and storage of words involving different linguistic demands (closed/open set). Capacity demand ranged from 2 to 7 words per trial. Participants performed the tasks in quiet and in speech-spectrum-shaped noise. Separate groups of participants were tested at different signal-to-noise ratios. Word-recognition measures were obtained to determine effects of noise on intelligibility. Results Contrary to predictions, steady-state noise did not have an adverse effect on working-memory performance in every situation. Noise negatively influenced performance for the task with high linguistic demand. Of particular importance is the finding that the adverse effects of background noise were not confined to conditions involving declines in recognition. Conclusions Perceptual, linguistic, and cognitive demands can dynamically affect verbal working-memory performance even in a population of healthy young adults. Results suggest that researchers and clinicians need to carefully analyze task demands to understand the independent and combined auditory-cognitive factors governing performance in everyday listening situations. PMID:26384291
Yamaguchi, Motonori; Randle, James M; Wilson, Thomas L; Logan, Gordon D
2017-09-01
Hierarchical control of skilled performance depends on chunking of several lower-level units into a single higher-level unit. The present study examined the relationship between chunking and recognition of trained materials in the context of typewriting. In 3 experiments, participants were trained with typing nonwords and were later tested on their recognition of the trained materials. In Experiment 1, participants typed the same words or nonwords in 5 consecutive trials while performing a concurrent memory task. In Experiment 2, participants typed the materials with lags between repetitions without a concurrent memory task. In both experiments, recognition of typing materials was associated with better chunking of the materials. Experiment 3 used the remember-know procedure to test the recollection and familiarity components of recognition. Remember judgments were associated with better chunking than know judgments or nonrecognition. These results indicate that chunking is associated with explicit recollection of prior typing episodes. The relevance of the existing memory models to chunking in typewriting was considered, and it is proposed that memory chunking improves retrieval of trained typing materials by integrating contextual cues into the memory traces. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Do pattern recognition skills transfer across sports? A preliminary analysis.
Smeeton, Nicholas J; Ward, Paul; Williams, A Mark
2004-02-01
The ability to recognize patterns of play is fundamental to performance in team sports. While typically assumed to be domain-specific, pattern recognition skills may transfer from one sport to another if similarities exist in the perceptual features and their relations and/or the strategies used to encode and retrieve relevant information. A transfer paradigm was employed to compare skilled and less skilled soccer, field hockey and volleyball players' pattern recognition skills. Participants viewed structured and unstructured action sequences from each sport, half of which were later re-presented, randomly intermixed with clips not previously seen. The task was to identify previously viewed action sequences quickly and accurately. Transfer of pattern recognition skill was dependent on the participant's skill, sport practised, nature of the task and degree of structure. The skilled soccer and hockey players were quicker than the skilled volleyball players at recognizing structured soccer and hockey action sequences. Performance differences were not observed on the structured volleyball trials between the skilled soccer, field hockey and volleyball players. The skilled field hockey and soccer players were able to transfer perceptual information or strategies between their respective sports. The less skilled participants' results were less clear. Implications for domain-specific expertise, transfer and diversity across domains are discussed.
Sunday, Mackenzie A; Richler, Jennifer J; Gauthier, Isabel
2017-07-01
The part-whole paradigm was one of the first measures of holistic processing and it has been used to address several topics in face recognition, including its development, other-race effects, and more recently, whether holistic processing is correlated with face recognition ability. However, the task was not designed to measure individual differences and it has produced measurements with low reliability. We created a new holistic processing test designed to measure individual differences based on the part-whole paradigm, the Vanderbilt Part Whole Test (VPWT). Measurements in the part and whole conditions were reliable, but, surprisingly, there was no evidence for reliable individual differences in the part-whole index (how well a person can take advantage of a face part presented within a whole face context compared to the part presented without a whole face) because part and whole conditions were strongly correlated. The same result was obtained in a version of the original part-whole task that was modified to increase its reliability. Controlling for object recognition ability, we found that variance in the whole condition does not predict any additional variance in face recognition over what is already predicted by performance in the part condition.
Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria
2016-01-01
Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests of absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity. PMID:27729884
Chen, J Y C; Terrence, P I
2008-08-01
This study examined the concurrent performance of military gunnery, robotics control and communication tasks in a simulated environment. More specifically, the study investigated how aided target recognition (AiTR) capabilities (delivered either through tactile or tactile + visual cueing) for the gunnery task might benefit overall performance. Results showed that AiTR benefited not only the gunnery task, but also the concurrent robotics and communication tasks. The participants' spatial ability was found to be a good indicator of their gunnery and robotics task performance. However, when AiTR was available to assist their gunnery task, those participants of lower spatial ability were able to perform their robotics tasks as well as those of higher spatial ability. Finally, participants' workload assessment was significantly higher when they teleoperated (i.e. remotely operated) a robot and when their gunnery task was unassisted. These results will further understanding of multitasking performance in military tasking environments. These results will also facilitate the implementation of robots in military settings and will provide useful data to military system designs.
Recognition and Posing of Emotional Expressions by Abused Children and Their Mothers.
ERIC Educational Resources Information Center
Camras, Linda A.; And Others
1988-01-01
A total of 20 abused and 20 nonabused children aged three to seven years, together with their mothers, participated in a facial expression posing task and a facial expression recognition task. Findings suggest that abused children may not observe the easily interpreted voluntary displays of emotion by their mothers as often as nonabused children do. (RH)
ERIC Educational Resources Information Center
Moldovan, Cornelia D.; Sanchez-Casas, Rosa; Demestre, Josep; Ferre, Pilar
2012-01-01
Previous evidence has shown that word pairs that are either related in form (e.g., "ruc-berro"; donkey-watercress) or very closely semantically related (e.g., "ruc-caballo", donkey-horse) produce interference effects in a translation recognition task (Ferre et al., 2006; Guasch et al., 2008). However, these effects are not…
ERIC Educational Resources Information Center
Kambara, Toshimune; Tsukiura, Takashi; Shigemune, Yayoi; Kanno, Akitake; Nouchi, Rui; Yomogida, Yukihito; Kawashima, Ryuta
2013-01-01
This study examined behavioral changes in 15-day learning of word-picture (WP) and word-sound (WS) associations, using meaningless stimuli. Subjects performed a learning task and two recognition tasks under the WP and WS conditions every day for 15 days. Two main findings emerged from this study. First, behavioral data of recognition accuracy and…
ERIC Educational Resources Information Center
Cowan, Nelson; Saults, J. Scott
2013-01-01
It is often proposed that individuals with high working memory span overcome proactive interference (PI) from previous trials, saving working memory for task-relevant items. We examined this hypothesis in word-list probe recognition. We found no difference in PI related to span. Instead, ex-Gaussian analysis of reaction time showed speed…
ERIC Educational Resources Information Center
Yang, Mu; Lewis, Freeman C.; Sarvi, Michael S.; Foley, Gillian M.; Crawley, Jacqueline N.
2015-01-01
Chromosomal 16p11.2 deletion syndrome frequently presents with intellectual disabilities, speech delays, and autism. Here we investigated the Dolmetsch line of 16p11.2 heterozygous (+/-) mice on a range of cognitive tasks with different neuroanatomical substrates. Robust novel object recognition deficits were replicated in two cohorts of 16p11.2…
Susceptibility to false memories in patients with ACoA aneurysm.
Borsutzky, Sabine; Fujiwara, Esther; Brand, Matthias; Markowitsch, Hans J
2010-08-01
We examined ACoA patients regarding their susceptibility to a range of false memory phenomena. We targeted provoked confabulation, false recall and false recognition in the Deese-Roediger-McDermott-paradigm (DRM-paradigm) as well as false recognition in a mirror reading task. ACoA patients produced more provoked confabulations and more false recognition in mirror reading than comparison subjects. Conversely, false recall/false recognition in the DRM-paradigm were similar in patients and controls. Whereas the former two indices of false memories were correlated, no relationship was revealed with the DRM-paradigm. Our results suggest that rupture of ACoA aneurysm leads to an increased susceptibility to a subset of false memory types. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
The role of skin colour in face recognition.
Bar-Haim, Yair; Saidel, Talia; Yovel, Galit
2009-01-01
People have better memory for faces from their own racial group than for faces from other races. It has been suggested that this own-race recognition advantage depends on an initial categorisation of faces into own and other race based on racial markers, resulting in poorer encoding of individual variations in other-race faces. Here, we used a study-test recognition task with stimuli in which the skin colour of African and Caucasian faces was manipulated to produce four categories representing the cross-section between skin colour and facial features. We show that, despite the notion that skin colour plays a major role in categorising faces into own and other-race faces, its effect on face recognition is minor relative to differences across races in facial features.
Process dissociation between contextual retrieval and item recognition.
Weis, Susanne; Specht, Karsten; Klaver, Peter; Tendolkar, Indira; Willmes, Klaus; Ruhlmann, Jürgen; Elger, Christian E; Fernández, Guillén
2004-12-22
We employed a source memory task in an event related fMRI study to dissociate MTL processes associated with either contextual retrieval or item recognition. To introduce context during study, stimuli (photographs of buildings and natural landscapes) were transformed into one of four single-color-scales: red, blue, yellow, or green. In the subsequent old/new recognition memory test, all stimuli were presented as gray scale photographs, and old-responses were followed by a four-alternative source judgment referring to the color in which the stimulus was presented during study. Our results suggest a clear-cut process dissociation within the human MTL. While an activity increase accompanies successful retrieval of contextual information, an activity decrease provides a familiarity signal that is sufficient for successful item recognition.
Satterthwaite, Theodore D.; Wolf, Daniel H.; Loughead, James; Ruparel, Kosha; Valdez, Jeffrey N.; Siegel, Steven J.; Kohler, Christian G.; Gur, Raquel E.; Gur, Ruben C.
2014-01-01
Objective Recognition memory of faces is impaired in patients with schizophrenia, as is the neural processing of threat-related signals, but how these deficits interact to produce symptoms is unclear. Here we used an affective face recognition paradigm to examine possible interactions between cognitive and affective neural systems in schizophrenia. Methods fMRI (3T) BOLD response was examined in 21 controls and 16 patients during a two-choice recognition task using images of human faces. Each target face had previously been displayed with a threatening or non-threatening affect, but was displayed here with neutral affect. Responses to successful recognition and to the effect of previously threatening vs. non-threatening affect were evaluated, and correlations with total BPRS scores were examined. Functional connectivity analyses examined the relationship between activation in the amygdala and cortical regions involved in recognition memory. Results Patients performed the task more slowly than controls. Controls recruited the expected cortical regions to a greater degree than patients, and patients with more severe symptoms demonstrated proportionally less recruitment. Increased symptoms were also correlated with augmented amygdala and orbitofrontal cortex response to threatening faces. Controls exhibited a negative correlation between activity in the amygdala and cortical regions involved in cognition, while patients showed a weakening of that relationship. Conclusions Increased symptoms were related to an enhanced threat response in limbic regions and a diminished recognition memory response in cortical regions, supporting a link between two brain systems often examined in isolation. This finding suggests that abnormal processing of threat-related signals in the environment may exacerbate cognitive impairment in schizophrenia. PMID:20194482
Intact anger recognition in depression despite aberrant visual facial information usage.
Clark, Cameron M; Chiu, Carina G; Diaz, Ruth L; Goghari, Vina M
2014-08-01
Previous literature has indicated abnormalities in facial emotion recognition abilities, as well as deficits in basic visual processes in major depression. However, the literature is unclear on a number of important factors, including whether these abnormalities represent deficient or enhanced emotion recognition abilities compared to control populations, and the degree to which basic visual deficits might impact this process. The present study investigated emotion recognition abilities for angry versus neutral facial expressions in a sample of undergraduate students with Beck Depression Inventory-II (BDI-II) scores indicative of moderate depression (i.e., ≥20), compared to matched low-BDI-II score (i.e., ≤2) controls via the Bubbles Facial Emotion Perception Task. Results indicated unimpaired behavioural performance in discriminating angry from neutral expressions in the high depressive symptoms group relative to the minimal depressive symptoms group, despite evidence of an abnormal pattern of visual facial information usage. The generalizability of the current findings is limited by the highly structured nature of the facial emotion recognition task used, as well as the use of an analog sample of undergraduates scoring high in self-rated symptoms of depression rather than a clinical sample. Our findings suggest that basic visual processes are involved in emotion recognition abnormalities in depression, demonstrating consistency with the emotion recognition literature in other psychopathologies (e.g., schizophrenia, autism, social anxiety). Future research should seek to replicate these findings in clinical populations with major depression, and assess the association between aberrant face gaze behaviours and symptom severity and social functioning. Copyright © 2014 Elsevier B.V. All rights reserved.
Palmer, Daniel; Creighton, Samantha; Prado, Vania F; Prado, Marco A M; Choleris, Elena; Winters, Boyer D
2016-09-15
Substantial evidence implicates acetylcholine (ACh) in the acquisition of object memories. While most research has focused on the role of the cholinergic basal forebrain and its cortical targets, there are additional cholinergic networks that may contribute to object recognition. The striatum contains an independent cholinergic network composed of interneurons. In the current study, we investigated the role of this cholinergic signalling in object recognition using mice deficient for the vesicular acetylcholine transporter (VAChT) within interneurons of the striatum. We tested whether these striatal VAChT(D2-Cre-flox/flox) mice would display normal short-term (5 or 15 min retention delay) and long-term (3 h retention delay) object recognition memory. In a home cage object recognition task, male and female VAChT(D2-Cre-flox/flox) mice were impaired selectively with a 15 min retention delay. When tested on an object location task, VAChT(D2-Cre-flox/flox) mice displayed intact spatial memory. Finally, when object recognition was tested in a Y-shaped apparatus, designed to minimize the influence of spatial and contextual cues, only females displayed impaired recognition with a 5 min retention delay, but when males were challenged with a 15 min retention delay, they were also impaired; neither males nor females were impaired with the 3 h delay. The pattern of results suggests that striatal cholinergic transmission plays a role in the short-term memory for object features, but not spatial location. Copyright © 2016 Elsevier B.V. All rights reserved.
Villain, Hélène; Benkahoul, Aïcha; Drougard, Anne; Lafragette, Marie; Muzotte, Elodie; Pech, Stéphane; Bui, Eric; Brunet, Alain; Birmes, Philippe; Roullet, Pascal
2016-01-01
Memory reconsolidation impairment using the β-noradrenergic receptor blocker propranolol is a promising novel treatment avenue for patients suffering from pathogenic memories, such as post-traumatic stress disorder (PTSD). However, in order to better inform targeted treatment development, the effects of this compound on memory need to be better characterized via translational research. We examined the effects of systemic propranolol administration in mice undergoing a wide range of behavioral tests to determine more specifically which aspects of memory consolidation and reconsolidation are impaired by propranolol. We found that propranolol (10 mg/kg) affected memory consolidation in non-aversive tasks (object recognition and object location) but not in moderately (Morris water maze; MWM) to highly (passive avoidance, conditioned taste aversion) aversive tasks. Further, propranolol impaired memory reconsolidation in the most and in the least aversive tasks, but not in the moderately aversive task, suggesting its amnesic effect was not related to task aversion. Moreover, in aquatic object recognition and location tasks in which animals were forced to behave (contrary to the classic versions of the tasks), propranolol did not impair memory reconsolidation. Taken together, our results suggest that the memory impairment observed after propranolol administration may result from a modification of the emotional valence of the memory rather than a disruption of the contextual component of the memory trace. This is relevant to the use of propranolol to block memory reconsolidation in individuals with PTSD, as such a treatment would not erase the traumatic memory but only reduce the emotional valence associated with this event. PMID:27014009
Limbrecht-Ecklundt, Kerstin; Scheck, Andreas; Jerg-Bretzke, Lucia; Walter, Steffen; Hoffmann, Holger; Traue, Harald C.
2013-01-01
Objective: This article includes the examination of potential methodological problems of the application of a forced choice response format in facial emotion recognition. Methodology: 33 subjects were presented with validated facial stimuli. The task was to make a decision about which emotion was shown. In addition, the subjective certainty concerning the decision was recorded. Results: The detection rates are 68% for fear, 81% for sadness, 85% for anger, 87% for surprise, 88% for disgust, and 94% for happiness, and are thus well above the random probability. Conclusion: This study refutes the concern that the use of forced choice formats may not adequately reflect actual recognition performance. The use of standardized tests to examine emotion recognition ability leads to valid results and can be used in different contexts. For example, the images presented here appear suitable for diagnosing deficits in emotion recognition in the context of psychological disorders and for mapping treatment progress. PMID:23798981
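Whether a detection rate is "well above the random probability" of a six-alternative forced choice (chance = 1/6 ≈ 17%) can be checked with a binomial tail probability. A minimal Python sketch with invented counts, since the abstract reports rates rather than raw trial numbers:

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """Binomial tail: probability of k or more successes in n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical check: if, say, 22 of 33 raters correctly labelled "fear"
# (about 68%, the lowest rate reported), the probability of doing that well
# by guessing among 6 options is vanishingly small.
tail = p_at_least(22, 33, 1 / 6)
print(tail < 1e-9)  # True
```

The same calculation applies to each emotion category; higher detection rates only push the tail probability lower.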
Normal mere exposure effect with impaired recognition in Alzheimer's disease.
Willems, Sylvie; Adam, Stéphane; Van der Linden, Martial
2002-02-01
We investigated the mere exposure effect and explicit memory in Alzheimer's disease (AD) patients and elderly control subjects, using unfamiliar faces. During the exposure phase, the subjects estimated the age of briefly flashed faces. The mere exposure effect was examined by presenting pairs of faces (old and new) and asking participants to select the face they liked. The participants were then presented with a forced-choice explicit recognition task. Control subjects exhibited above-chance preference and recognition scores for old faces. The AD patients also showed the mere exposure effect but no explicit recognition. These results suggest that the processes involved in the mere exposure effect are preserved in AD patients despite their impaired explicit recognition. The results are discussed in terms of Seamon et al.'s (1995) proposal that processes involved in the mere exposure effect are equivalent to those subserving perceptual priming. These processes would depend on extrastriate areas which are relatively preserved in AD patients.
Raymond, Jane E; O'Brien, Jennifer L
2009-08-01
Learning to associate the probability and value of behavioral outcomes with specific stimuli (value learning) is essential for rational decision making. However, in demanding cognitive conditions, access to learned values might be constrained by limited attentional capacity. We measured recognition of briefly presented faces seen previously in a value-learning task involving monetary wins and losses; the recognition task was performed both with and without constraints on available attention. Regardless of available attention, recognition was substantially enhanced for motivationally salient stimuli (i.e., stimuli highly predictive of outcomes), compared with equally familiar stimuli that had weak or no motivational salience, and this effect was found regardless of valence (win or loss). However, when attention was constrained (because stimuli were presented during an attentional blink, AB), valence determined recognition; win-associated faces showed no AB, but all other faces showed large ABs. Motivational salience acts independently of attention to modulate simple perceptual decisions, but when attention is limited, visual processing is biased in favor of reward-associated stimuli.
Face recognition and description abilities in people with mild intellectual disabilities.
Gawrylowicz, Julie; Gabbert, Fiona; Carson, Derek; Lindsay, William R; Hancock, Peter J B
2013-09-01
People with intellectual disabilities (ID) are as likely as the general population to find themselves in the situation of having to identify and/or describe a perpetrator's face to the police. However, limited verbal and memory abilities in people with ID might prevent them from engaging in standard police procedures. Two experiments examined face recognition and description abilities in people with mild intellectual disabilities (mID) and compared their performance with that of people without ID. Experiment 1 used three old/new face recognition tasks. Experiment 2 consisted of two face description tasks, during which participants had to verbally describe faces from memory and with the target in view. Participants with mID performed significantly poorer on both recognition and recall tasks than control participants. However, their group performance was better than chance and they showed variability in performance depending on the measures introduced. The practical implications of these findings in forensic settings are discussed. © 2013 John Wiley & Sons Ltd.
Impaired recognition of body expressions in the behavioral variant of frontotemporal dementia.
Van den Stock, Jan; De Winter, François-Laurent; de Gelder, Beatrice; Rangarajan, Janaki Raman; Cypers, Gert; Maes, Frederik; Sunaert, Stefan; Goffin, Karolien; Vandenberghe, Rik; Vandenbulcke, Mathieu
2015-08-01
Progressive deterioration of social cognition and emotion processing are core symptoms of the behavioral variant of frontotemporal dementia (bvFTD). Here we investigate whether bvFTD is also associated with impaired recognition of static (Experiment 1) and dynamic (Experiment 2) bodily expressions. In addition, we compared body expression processing with processing of static (Experiment 3) and dynamic (Experiment 4) facial expressions, as well as with face identity processing (Experiment 5). The results reveal that bvFTD is associated with impaired recognition of static and dynamic bodily and facial expressions, while identity processing was intact. No differential impairments were observed regarding motion (static vs. dynamic) or category (body vs. face). Within the bvFTD group, we observed a significant partial correlation between body and face expression recognition, when controlling for performance on the identity task. Voxel-Based Morphometry (VBM) analysis revealed that body emotion recognition was positively associated with gray matter volume in a region of the inferior frontal gyrus (pars orbitalis/triangularis). The results are in line with a supramodal emotion recognition deficit in bvFTD. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Knasel, T. Michael
1996-01-01
The primary goal of the Adaptive Vision Laboratory Research project was to develop advanced computer vision systems for automatic target recognition. The approach used in this effort combined several machine learning paradigms including evolutionary learning algorithms, neural networks, and adaptive clustering techniques to develop the E-MORPH system. This system is capable of generating pattern recognition systems to solve a wide variety of complex recognition tasks. A series of simulation experiments were conducted using E-MORPH to solve problems in OCR, military target recognition, industrial inspection, and medical image analysis. The bulk of the funds provided through this grant were used to purchase computer hardware and software to support these computationally intensive simulations. The payoff from this effort is the reduced need for human involvement in the design and implementation of recognition systems. We have shown that the techniques used in E-MORPH are generic and readily transition to other problem domains. Specifically, E-MORPH is a multi-phase evolutionary learning system that evolves cooperative sets of feature detectors and combines their responses using an adaptive classifier to form a complete pattern recognition system. The system can operate on binary or grayscale images. In our most recent experiments, we used multi-resolution images that are formed by applying a Gabor wavelet transform to a set of grayscale input images. To begin the learning process, candidate chips are extracted from the multi-resolution images to form a training set and a test set. A population of detector sets is randomly initialized to start the evolutionary process. Using a combination of evolutionary programming and genetic algorithms, the feature detectors are enhanced to solve a recognition problem. The design of E-MORPH and recognition results for a complex problem in medical image analysis are described at the end of this report.
The specific task involves the identification of vertebrae in x-ray images of human spinal columns. This problem is extremely challenging because individual vertebrae exhibit variation in shape, scale, orientation, and contrast. E-MORPH generated several accurate recognition systems to solve this task. This dual use of ATR technology clearly demonstrates the flexibility and power of our approach.
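The Gabor wavelet preprocessing described above can be illustrated with a minimal filter-bank sketch. The kernel size, wavelengths, and the sigma-to-wavelength ratio below are illustrative assumptions, not E-MORPH's actual parameters; convolving an image with each kernel yields one channel of a multi-resolution representation.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a Gabor filter: a Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_bank(size=15, wavelengths=(4, 8), n_orientations=4):
    """A small multi-scale, multi-orientation bank (sigma tied to wavelength)."""
    thetas = [k * np.pi / n_orientations for k in range(n_orientations)]
    return [gabor_kernel(size, lam, th, sigma=0.56 * lam)
            for lam in wavelengths for th in thetas]

bank = gabor_bank()  # 2 scales x 4 orientations = 8 kernels
```

Each kernel responds to oriented structure at one scale and orientation; the 0.56 ratio is a common choice giving roughly one octave of bandwidth.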
Can anchor models explain inverted-U effects in facial judgments?
Mignault, Alain; Bhaumik, Arijit; Chaudhuri, Avi
2009-06-01
Researchers in a variety of disciplines have found that participants take less time and generate less diversity of responses when judging stimuli towards the ends of a scale than when judging those near the center. Three types of models, connectionist, exemplar, and anchor models, can account for these inverted-U effects. Anchor models assume that stimuli near the ends of the scale are used as anchors to compare with the other stimuli, implying that anchor representations are activated for each judgment. Therefore, participants should learn the anchors better than the other stimuli. Participants were 40 students from the Department of Psychology at McGill University (5 men; M age = 20.5 yr.; SD = 1.7). The experiment involved two tasks: first participants judged facial gender and then performed a recognition task. The results showed no correlation between the position on the gender scale and recognition accuracy. Several hypotheses were offered to explain these results.
I undervalue you but I need you: the dissociation of attitude and memory toward in-group members.
Zhao, Ke; Wu, Qi; Shen, Xunbing; Xuan, Yuming; Fu, Xiaolan
2012-01-01
In the present study, the in-group bias or in-group derogation among Mainland Chinese was investigated through a rating task and a recognition test. In two experiments, participants from two universities with similar ranks rated novel faces or names and then had a recognition test. Half of the faces or names were labeled as participants' own university and the other half were labeled as their counterpart. Results showed that, for either faces or names, rating scores for out-group members were consistently higher than those for in-group members, whereas the recognition accuracy showed just the opposite. These results indicated that the attitude and memory for group-relevant information might be dissociated among Mainland Chinese.
NASA Astrophysics Data System (ADS)
Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.
2015-01-01
The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition and handwriting recognition. It has the ability to process long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture. We explore not only low-level combination (feature space combination) but also mid-level combination (internal system representation combination) and high-level combination (decoding combination). The results are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.
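The low-level and high-level combinations contrasted above can be sketched in a few lines; the stream dimensions, class count, and equal weighting are arbitrary assumptions, the BLSTM itself is omitted, and the mid-level (internal representation) variant is not shown since it requires access to network internals.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20  # frames in one word image

# Two hypothetical per-frame feature streams for the same word
feats_a = rng.random((T, 12))   # e.g. geometric features
feats_b = rng.random((T, 30))   # e.g. pixel-based features

# Low-level combination: concatenate in feature space before the network
low_level_input = np.concatenate([feats_a, feats_b], axis=1)  # shape (T, 42)

# High-level combination: average per-frame posteriors of two trained systems
def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

posteriors_a = softmax(rng.random((T, 5)))  # stand-ins for network outputs
posteriors_b = softmax(rng.random((T, 5)))
combined = 0.5 * posteriors_a + 0.5 * posteriors_b  # still valid posteriors
```

In the low-level scheme a single network sees the joint feature space; in the high-level scheme each network is trained separately and only their output distributions are merged before decoding.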
Reverse control for humanoid robot task recognition.
Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul
2012-12-01
Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
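The null-space projection used above to decouple the controllers can be illustrated with the standard task-function operator P = I - J⁺J, which filters a secondary-task velocity so it causes no motion in the primary task space. The Jacobian and velocities below are made-up values for a hypothetical 5-DoF arm, not the robot's actual tasks.

```python
import numpy as np

def nullspace_projector(J):
    """P = I - pinv(J) @ J projects joint velocities into the null space of task J."""
    Jp = np.linalg.pinv(J)
    return np.eye(J.shape[1]) - Jp @ J

# Hypothetical 2-dimensional task Jacobian for a 5-DoF arm
J1 = np.array([[1.0, 0.5, 0.0, 0.2, 0.0],
               [0.0, 1.0, 0.3, 0.0, 0.1]])
P1 = nullspace_projector(J1)

qdot2 = np.array([0.1, -0.2, 0.4, 0.0, 0.3])  # secondary-task joint velocity
qdot = P1 @ qdot2
# J1 @ qdot ≈ 0: the filtered velocity leaves task 1 unaffected,
# which is what makes parallel tasks separable when reverse-engineering a motion.
```

Because P is an orthogonal projector (P @ P = P), stacking such projections lets lower-priority controllers act only where higher-priority tasks leave freedom.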
Facial expression influences face identity recognition during the attentional blink.
Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J
2014-12-01
Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.
Mania, Katerina; Wooldridge, Dave; Coxon, Matthew; Robinson, Andrew
2006-01-01
Accuracy of memory performance per se is an imperfect reflection of the cognitive activity (awareness states) that underlies performance in memory tasks. The aim of this research is to investigate the effect of varied visual and interaction fidelity of immersive virtual environments on memory awareness states. A between groups experiment was carried out to explore the effect of rendering quality on location-based recognition memory for objects and associated states of awareness. The experimental space, consisting of two interconnected rooms, was rendered either flat-shaded or using radiosity rendering. The computer graphics simulations were displayed on a stereo head-tracked Head Mounted Display. Participants completed a recognition memory task after exposure to the experimental space and reported one of four states of awareness following object recognition. These reflected the level of visual mental imagery involved during retrieval, the familiarity of the recollection, and also included guesses. Experimental results revealed variations in the distribution of participants' awareness states across conditions while memory performance failed to reveal any. Interestingly, results revealed a higher proportion of recollections associated with mental imagery in the flat-shaded condition. These findings comply with similar effects revealed in two earlier studies summarized here, which demonstrated that the less "naturalistic" interaction interface or interface of low interaction fidelity provoked a higher proportion of recognitions based on visual mental images.
Partially converted stereoscopic images and the effects on visual attention and memory
NASA Astrophysics Data System (ADS)
Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Kawai, Takashi; Watanabe, Katsumi
2015-03-01
This study contained two experimental examinations of cognitive activities such as visual attention and memory in viewing stereoscopic (3D) images. For this study, partially converted 3D images were used with binocular parallax added to a specific region of the image. In Experiment 1, change blindness was used as the presented stimulus. Visual attention and the impact on memory were investigated by measuring the response time to accomplish the given task. In the change blindness task, an 80 ms blank was inserted between the original and altered images, and the two images were presented alternatingly for 240 ms each. Subjects were asked to temporarily memorize the two switching images and to compare them, visually recognizing the difference between the two. The stimuli for four conditions (2D, 3D, partially converted 3D, distracted partially converted 3D) were randomly displayed for 20 subjects. The results of Experiment 1 showed that partially converted 3D images tend to attract visual attention and are prone to remain in viewers' memory in the area where moderate negative parallax has been added. In order to examine the impact of a dynamic binocular disparity on partially converted 3D images, an evaluation experiment was conducted that applied learning, distraction, and recognition tasks for 33 subjects. The learning task involved memorizing the location of cells in a 5 × 5 matrix pattern using two different colors. Two cells were positioned with alternating colors, and one of the gray cells was moved up, down, left, or right by one cell width. Experimental conditions were set as a partially converted 3D condition in which a gray cell moved diagonally for a certain period of time with a dynamic binocular disparity added, a 3D condition in which binocular disparity was added to all gray cells, and a 2D condition. The correct response rates for recognition of each task after the distraction task were compared.
The results of Experiment 2 showed that the correct response rate in the partial 3D condition was significantly higher in the recognition task than in the other conditions. These results showed that partially converted 3D images tended to attract visual attention and to affect viewers' memory.
Enhanced tactile encoding and memory recognition in congenital blindness.
D'Angiulli, Amedeo; Waraich, Paul
2002-06-01
Several behavioural studies have shown that early-blind persons possess superior tactile skills. Since neurophysiological data show that early-blind persons recruit visual as well as somatosensory cortex to carry out tactile processing (cross-modal plasticity), blind persons' sharper tactile skills may be related to cortical re-organisation resulting from loss of vision early in their life. To examine the nature of blind individuals' tactile superiority and its implications for cross-modal plasticity, we compared the tactile performance of congenitally totally blind, low-vision and sighted children on a raised-line picture identification test and re-test, assessing effects of task familiarity, exploratory strategy and memory recognition. What distinguished the blind from the other children was higher memory recognition and higher tactile encoding associated with efficient exploration. These results suggest that enhanced perceptual encoding and recognition memory may be two cognitive correlates of cross-modal plasticity in congenital blindness.
Combination of dynamic Bayesian network classifiers for the recognition of degraded characters
NASA Astrophysics Data System (ADS)
Likforman-Sulem, Laurence; Sigelle, Marc
2009-01-01
We investigate in this paper the combination of DBN (Dynamic Bayesian Network) classifiers, either independent or coupled, for the recognition of degraded characters. The independent classifiers are a vertical HMM and a horizontal HMM whose observable outputs are the image columns and the image rows respectively. The coupled classifiers, presented in a previous study, associate the vertical and horizontal observation streams into single DBNs. The scores of the independent and coupled classifiers are then combined linearly at the decision level. We compare the different classifiers (independent, coupled, or linearly combined) on two tasks: the recognition of artificially degraded handwritten digits and the recognition of real degraded old printed characters. Our results show that coupled DBNs perform better on degraded characters than the linear combination of independent HMM scores. Our results also show that the best classifier is obtained by linearly combining the scores of the best coupled DBN and the best independent HMM.
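The decision-level linear combination described above can be sketched as follows; the per-class log-likelihoods and the weight alpha are illustrative values, not numbers from the paper, and in practice alpha would be tuned on a validation set.

```python
import numpy as np

# Hypothetical per-class log-likelihoods for one character image,
# produced by two independent classifiers.
classes = ["a", "b", "c"]
ll_vertical   = np.array([-12.0, -9.5, -11.0])   # column-stream (vertical) HMM
ll_horizontal = np.array([-10.0, -10.5, -13.0])  # row-stream (horizontal) HMM

alpha = 0.6  # combination weight for the vertical stream
combined = alpha * ll_vertical + (1 - alpha) * ll_horizontal

# Decide on the class with the highest combined score
decision = classes[int(np.argmax(combined))]
```

Here the vertical HMM alone would pick "b" and the horizontal HMM "a"; the weighted sum arbitrates between them, which is the sense in which the combination operates at the decision level rather than inside a single model.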
Artificial neural networks for document analysis and recognition.
Marinai, Simone; Gori, Marco; Soda, Giovanni; Society, Computer
2005-01-01
Artificial neural networks have been extensively applied to document analysis and recognition. Most efforts have been devoted to the recognition of isolated handwritten and printed characters, with widely recognized successful results. However, many other document processing tasks, like preprocessing, layout analysis, character segmentation, word recognition, and signature verification, have been effectively addressed with very promising results. This paper surveys the most significant problems in the area of offline document image processing where connectionist-based approaches have been applied. Similarities and differences between approaches belonging to different categories are discussed. Particular emphasis is given to the crucial role of prior knowledge for the conception of both appropriate architectures and learning algorithms. Finally, the paper provides a critical analysis of the reviewed approaches and depicts the most promising research guidelines in the field. In particular, a second generation of connectionist-based models is foreseen, based on appropriate graphical representations of the learning environment.
Lysaker, Paul H; Leonhardt, Bethany L; Brüne, Martin; Buck, Kelly D; James, Alison; Vohs, Jenifer; Francis, Michael; Hamm, Jay A; Salvatore, Giampaolo; Ringer, Jamie M; Dimaggio, Giancarlo
2014-09-30
While many with schizophrenia spectrum disorders experience difficulties understanding the feelings of others, little is known about the psychological antecedents of these deficits. To explore these issues we examined whether deficits in mental state decoding, mental state reasoning and metacognitive capacity predict performance on an emotion recognition task. Participants were 115 adults with a schizophrenia spectrum disorder and 58 adults with substance use disorders but no history of a diagnosis of psychosis who completed the Eyes and Hinting Tests. Metacognitive capacity was assessed using the Metacognitive Assessment Scale Abbreviated and emotion recognition was assessed using the Bell Lysaker Emotion Recognition Test. Results revealed that the schizophrenia patients performed more poorly than controls on tests of emotion recognition, mental state decoding, mental state reasoning and metacognition. Lesser capacities for mental state decoding, mental state reasoning and metacognition were all uniquely related to emotion recognition within the schizophrenia group, even after controlling for neurocognition and symptoms in a stepwise multiple regression. Results suggest that deficits in emotion recognition in schizophrenia may partly result from a combination of impairments in the ability to judge the cognitive and affective states of others and difficulties forming complex representations of self and others. Published by Elsevier Ireland Ltd.
Evidence for modality-independent order coding in working memory.
Depoorter, Ann; Vandierendonck, André
2009-03-01
The aim of the present study was to investigate the representation of serial order in working memory, more specifically whether serial order is coded by means of a modality-dependent or a modality-independent order code. This was investigated by means of a series of four experiments based on a dual-task methodology in which one short-term memory task was embedded between the presentation and recall of another short-term memory task. Two aspects were varied in these memory tasks: the modality of the stimulus materials (verbal or visuo-spatial) and the presence of an order component in the task (an order or an item memory task). The results of this study showed impaired primary-task recognition performance when both the primary and the embedded task included an order component, irrespective of the modality of the stimulus materials. If one or both of the tasks did not contain an order component, less interference was found. The results of this study support the existence of a modality-independent order code.
Age and measurement time-of-day effects on speech recognition in noise.
Veneman, Carrie E; Gordon-Salant, Sandra; Matthews, Lois J; Dubno, Judy R
2013-01-01
The purpose of this study was to determine the effect of measurement time of day on speech recognition in noise and the extent to which time-of-day effects differ with age. Older adults tend to have more difficulty understanding speech in noise than younger adults, even when hearing is normal. Two possible contributors to this age difference in speech recognition may be measurement time of day and inhibition. Most younger adults are "evening-type," showing peak circadian arousal in the evening, whereas most older adults are "morning-type," with circadian arousal peaking in the morning. Tasks that require inhibition of irrelevant information have been shown to be affected by measurement time of day, with maximum performance attained at one's peak time of day. The authors hypothesized that a change in inhibition would be associated with measurement time of day and therefore affect speech recognition in noise, with better performance in the morning for older adults and in the evening for younger adults. Fifteen younger evening-type adults (20-28 years) and 15 older morning-type adults with normal hearing (66-78 years) listened to the Hearing in Noise Test (HINT) and the Quick Speech in Noise (QuickSIN) test in the morning and evening (peak and off-peak times). Time of day preference was assessed using the Morningness-Eveningness Questionnaire. Sentences and noise were presented binaurally through insert earphones. During morning and evening sessions, participants solved word-association problems within the visual-distraction task (VDT), which was used as an estimate of inhibition. After each session, participants rated perceived mental demand of the tasks using a revised version of the NASA Task Load Index. Younger adults performed significantly better on the speech-in-noise tasks and rated themselves as requiring significantly less mental demand when tested at their peak (evening) than off-peak (morning) time of day.
In contrast, time-of-day effects were not observed for the older adults on the speech recognition or rating tasks. Although older adults required significantly more advantageous signal-to-noise ratios than younger adults for equivalent speech-recognition performance, a significantly larger younger versus older age difference in speech recognition was observed in the evening than in the morning. Older adults performed significantly poorer than younger adults on the VDT, but performance was not affected by measurement time of day. VDT performance for misleading distracter items was significantly correlated with HINT and QuickSIN test performance at the peak measurement time of day. Although all participants had normal hearing, speech recognition in noise was significantly poorer for older than younger adults, with larger age-related differences in the evening (an off-peak time for older adults) than in the morning. The significant effect of measurement time of day suggests that this factor may impact the clinical assessment of speech recognition in noise for all individuals. It appears that inhibition, as estimated by a visual distraction task for misleading visual items, is a cognitive mechanism that is related to speech-recognition performance in noise, at least at a listener's peak time of day.
Audiovisual speech perception development at varying levels of perceptual processing
Lalonde, Kaylah; Holt, Rachael Frush
2016-01-01
This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318
Isomura, Tomoko; Ogawa, Shino; Yamada, Satoko; Shibasaki, Masahiro; Masataka, Nobuo
2014-01-01
Previous studies have demonstrated that angry faces capture humans' attention more rapidly than emotionally positive faces. This phenomenon is referred to as the anger superiority effect (ASE). Despite atypical emotional processing, adults and children with Autism Spectrum Disorders (ASD) have been reported to show ASE as well as typically developed (TD) individuals. So far, however, few studies have clarified whether or not the mechanisms underlying ASE are the same for both TD and ASD individuals. Here, we tested how TD and ASD children process schematic emotional faces during detection by employing a recognition task in combination with a face-in-the-crowd task. Results of the face-in-the-crowd task revealed the prevalence of ASE both in TD and ASD children. However, the results of the recognition task revealed group differences: In TD children, detection of angry faces required more configural face processing and disrupted the processing of local features. In ASD children, on the other hand, it required more feature-based processing rather than configural processing. Despite the small sample sizes, these findings provide preliminary evidence that children with ASD, in contrast to TD children, show quick detection of angry faces by extracting local features in faces. PMID:24904477
Face identity matching is selectively impaired in developmental prosopagnosia.
Fisher, Katie; Towler, John; Eimer, Martin
2017-04-01
Individuals with developmental prosopagnosia (DP) have severe face recognition deficits, but the mechanisms that are responsible for these deficits have not yet been fully identified. We assessed whether the activation of visual working memory for individual faces is selectively impaired in DP. Twelve DPs and twelve age-matched control participants were tested in a task where they reported whether successively presented faces showed the same or two different individuals, and another task where they judged whether the faces showed the same or different facial expressions. Repetitions versus changes of the other currently irrelevant attribute were varied independently. DPs showed impaired performance in the identity task, but performed at the same level as controls in the expression task. An electrophysiological marker for the activation of visual face memory by identity matches (N250r component) was strongly attenuated in the DP group, and the size of this attenuation was correlated with poor performance in a standardized face recognition test. Results demonstrate an identity-specific deficit of visual face memory in DPs. Their reduced sensitivity to identity matches in the presence of other image changes could result from earlier deficits in the perceptual extraction of image-invariant visual identity cues from face images. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Annett, John
An experienced person, in such tasks as sonar detection and recognition, has a considerable superiority over a machine recognition system in auditory pattern recognition. However, people require extensive exposure to auditory patterns before achieving a high level of performance. In an attempt to discover a method of training people to recognize…
The MITLL NIST LRE 2015 Language Recognition System
2016-05-06
Torres-Carrasquillo, Pedro; Dehak, Najim; Godoy, Elizabeth; Reynolds, Douglas; Richardson, Fred
...most recent MIT Lincoln Laboratory language recognition system developed for the NIST 2015 Language Recognition Evaluation (LRE). The submission...
Task: The National Institute of Standards and Technology (NIST) has conducted formal evaluations of language detection algorithms since 1994. In
NK1 receptor antagonism and emotional processing in healthy volunteers.
Chandra, P; Hafizi, S; Massey-Chase, R M; Goodwin, G M; Cowen, P J; Harmer, C J
2010-04-01
The neurokinin-1 (NK(1)) receptor antagonist, aprepitant, showed activity in several animal models of depression; however, its efficacy in clinical trials was disappointing. There is little knowledge of the role of NK(1) receptors in human emotional behaviour to help explain this discrepancy. The aim of the current study was to assess the effects of a single oral dose of aprepitant (125 mg) on models of emotional processing sensitive to conventional antidepressant drug administration in 38 healthy volunteers, randomly allocated to receive aprepitant or placebo in a between-groups double-blind design. Performance on measures of facial expression recognition, emotional categorisation, memory and attentional visual-probe was assessed following drug absorption. Relative to placebo, aprepitant improved recognition of happy facial expressions and increased vigilance to emotional information in the unmasked condition of the visual probe task. In contrast, aprepitant impaired emotional memory and slowed responses in the facial expression recognition task, suggesting possible deleterious effects on cognition. These results suggest that while antagonism of NK(1) receptors does affect emotional processing in humans, its effects are more restricted and less consistent across tasks than those of conventional antidepressants. Human models of emotional processing may provide a useful means of assessing the likely therapeutic potential of new treatments for depression.
Association of physical fitness and fatness with cognitive function in women with fibromyalgia.
Soriano-Maldonado, Alberto; Artero, Enrique G; Segura-Jiménez, Víctor; Aparicio, Virgina A; Estévez-López, Fernando; Álvarez-Gallardo, Inmaculada C; Munguía-Izquierdo, Diego; Casimiro-Andújar, Antonio J; Delgado-Fernández, Manuel; Ortega, Francisco B
2016-09-01
This study assessed the association of fitness and fatness with cognitive function in women with fibromyalgia, and the independent influence of their single components on cognitive tasks. A total of 468 women with fibromyalgia were included. Speed of information processing and working memory (Paced Auditory Serial Addition Task), as well as immediate and delayed recall, verbal learning and delayed recognition (Rey Auditory Verbal Learning Test) were assessed. Aerobic fitness, muscle strength, flexibility and motor agility were assessed with the Senior Fitness Test battery. Body mass index, percent body fat, fat-mass index and waist circumference were measured. Aerobic fitness was associated with attention and working memory (all, p < 0.05). All fitness components were generally associated with delayed recall, verbal learning and delayed recognition (all, p < 0.05). Aerobic fitness showed the most powerful association with attention, working memory, delayed recall and verbal learning, while motor agility was the most powerful indicator of delayed recognition. None of the fatness parameters were associated with any of the outcomes (all, p > 0.05). Our results suggest that fitness, but not fatness, is associated with cognitive function in women with fibromyalgia. Aerobic fitness appears to be the most powerful fitness component regarding the cognitive tasks evaluated.
Green, Amity E; Fitzgerald, Paul B; Johnston, Patrick J; Nathan, Pradeep J; Kulkarni, Jayashri; Croft, Rodney J
2017-08-01
Schizophrenia is characterised by significant episodic memory impairment that is thought to be related to problems with encoding; however, the neurofunctional mechanisms underlying these deficits are not well understood. The present study used a subsequent recognition memory paradigm and event-related potentials (ERPs) to investigate temporal aspects of episodic memory encoding deficits in schizophrenia. Electroencephalographic data were recorded in 24 patients and 19 healthy controls whilst participants categorised single words as pleasant/unpleasant. ERPs were generated to subsequently recognised versus unrecognised words on the basis of a forced-choice recognition memory task. Subsequent memory effects were examined with the late positive component (LPP). Group differences in N1, P2, N400 and LPP were examined for words correctly recognised. Patients performed more poorly than controls on the recognition task. During encoding, patients had significantly smaller N400 and LPP amplitudes than controls. LPP amplitude correlated with task performance; however, amplitudes did not differ between patients and controls as a function of subsequent memory. No significant differences in N1 or P2 amplitude or latency were observed. The present results indicate that early sensory processes are intact, and that dysfunctional higher-order cognitive processes during encoding contribute to episodic memory impairments in schizophrenia.
Is talking to an automated teller machine natural and fun?
Chan, F Y; Khalid, H M
Usability and affective issues of using automatic speech recognition technology to interact with an automated teller machine (ATM) were investigated in two experiments. The first uncovered dialogue patterns of ATM users for the purpose of designing the user interface for a simulated speech ATM system. Applying the Wizard-of-Oz methodology, multiple mapping and word-spotting techniques, the speech-driven ATM accommodates bilingual users of Bahasa Melayu and English. The second experiment evaluated the usability of a hybrid speech ATM, comparing it with a simulated manual ATM. The aim was to investigate how natural and fun talking to a speech ATM can be for first-time users. Subjects performed withdrawal and balance-enquiry tasks. ANOVA was performed on the usability and affective data. The results showed significant differences between systems in the ability to complete the tasks as well as in transaction errors. Performance was measured as the time taken by subjects to complete the task and the number of speech recognition errors that occurred. On the basis of user emotions, it can be said that the hybrid speech system enabled pleasurable interaction. Despite the limitations of speech recognition technology, users appear ready to talk to the ATM when it becomes available for public use.
Renoult, Louis; Davidson, Patrick S R; Schmitz, Erika; Park, Lillian; Campbell, Kenneth; Moscovitch, Morris; Levine, Brian
2015-01-01
A common assertion is that semantic memory emerges from episodic memory, shedding the distinctive contexts associated with episodes over time and/or repeated instances. Some semantic concepts, however, may retain their episodic origins or acquire episodic information during life experiences. The current study examined this hypothesis by investigating the ERP correlates of autobiographically significant (AS) concepts, that is, semantic concepts that are associated with vivid episodic memories. We inferred the contribution of semantic and episodic memory to AS concepts using the amplitudes of the N400 and late positive component, respectively. We compared famous names that easily brought to mind episodic memories (high AS names) against equally famous names that did not bring such recollections to mind (low AS names) on a semantic task (fame judgment) and an episodic task (recognition memory). Compared with low AS names, high AS names were associated with increased amplitude of the late positive component in both tasks. Moreover, in the recognition task, this effect of AS was highly correlated with recognition confidence. In contrast, the N400 component did not differentiate the high versus low AS names but, instead, was related to the amount of general knowledge participants had regarding each name. These results suggest that semantic concepts high in AS, such as famous names, have an episodic component and are associated with similar brain processes to those that are engaged by episodic memory. Studying AS concepts may provide unique insights into how episodic and semantic memory interact.
Social enrichment improves social recognition memory in male rats.
Toyoshima, Michimasa; Yamada, Kazuo; Sugita, Manami; Ichitani, Yukio
2018-05-01
The social environment is thought to have a strong impact on cognitive functions. In the present study, we investigated whether social enrichment could affect rats' memory ability using the "Different Objects Task (DOT)," in which the level of memory load can be modulated by changing the number of objects to be remembered. In addition, we applied the DOT to a social discrimination task using unfamiliar conspecific juveniles instead of objects. Animals were housed in one of three housing conditions after weaning [postnatal day (PND) 21]: social-separated (1 per cage), standard (3 per cage), or social-enriched (10 per cage). The object and social recognition tasks were conducted on PND 60. In the sample phase, the rats were allowed to explore a field in which 3, 4, or 5 different, unfamiliar stimuli (conspecific juveniles behind a mesh, or objects) were presented. In the test phase, conducted after a 5-min delay, social-separated rats were able to discriminate the novel conspecific from the familiar ones only under the condition in which three different conspecifics were presented; social-enriched rats managed to recognize the novel conspecific even under the condition of five different conspecifics. On the other hand, in the object recognition task, both social-separated and social-enriched rats were able to discriminate the novel object from the familiar ones under the condition of five different objects. These results suggest that social enrichment can enhance social, but not object, memory span.
Batterink, Laura; Neville, Helen
2011-11-01
The vast majority of word meanings are learned simply by extracting them from context rather than by rote memorization or explicit instruction. Although this skill is remarkable, little is known about the brain mechanisms involved. In the present study, ERPs were recorded as participants read stories in which pseudowords were presented multiple times, embedded in consistent, meaningful contexts (referred to as meaning condition, M+) or inconsistent, meaningless contexts (M-). Word learning was then assessed implicitly using a lexical decision task and explicitly through recall and recognition tasks. Overall, during story reading, M- words elicited a larger N400 than M+ words, suggesting that participants were better able to semantically integrate M+ words than M- words throughout the story. In addition, M+ words whose meanings were subsequently correctly recognized and recalled elicited a more positive ERP in a later time window compared with M+ words whose meanings were incorrectly remembered, consistent with the idea that the late positive component is an index of encoding processes. In the lexical decision task, no behavioral or electrophysiological evidence for implicit priming was found for M+ words. In contrast, during the explicit recognition task, M+ words showed a robust N400 effect. The N400 effect was dependent upon recognition performance, such that only correctly recognized M+ words elicited an N400. This pattern of results provides evidence that the explicit representations of word meanings can develop rapidly, whereas implicit representations may require more extensive exposure or more time to emerge.
Palmer, Clare E; Langbehn, Douglas; Tabrizi, Sarah J; Papoutsi, Marina
2017-01-01
Cognitive impairment across multiple domains is common in many neurodegenerative movement disorders, such as Huntington's disease (HD) and Parkinson's disease (PD). Many tasks are available to assess different aspects of this dysfunction; however, it is imperative that these show high test-retest reliability if they are to be used to track disease progression or response to treatment in patient populations. Moreover, to ensure that effects of practice across testing sessions are not misconstrued as clinical improvement in clinical trials, tasks that are particularly vulnerable to practice effects need to be highlighted. In this study we evaluated test-retest reliability in mean performance across three testing sessions of four tasks that are commonly used to measure cognitive dysfunction associated with striatal impairment: a combined Simon Stop-Signal task, a modified emotion recognition task, a circle tracing task, and the trail making task. Practice effects were seen between sessions 1 and 2 across all tasks for the majority of dependent variables, particularly reaction time variables; some, but not all, diminished in the third session. Good test-retest reliability across all sessions was seen for the emotion recognition, circle tracing, and trail making tasks. The Simon interference effect and stop-signal reaction time (SSRT) from the combined Simon Stop-Signal task showed moderate test-retest reliability; however, the combined SSRT interference effect showed poor test-retest reliability. Our results emphasize the need to use control groups when tracking clinical progression, or to use pre-baseline training on tasks susceptible to practice effects.
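For readers unfamiliar with the SSRT measure discussed in the abstract above, it is commonly estimated with the integration method of the horse-race model: take the go-trial RT at the quantile corresponding to the probability of responding on stop trials, then subtract the mean stop-signal delay. A minimal sketch with invented numbers, not data from this study:

```python
# Illustrative sketch of the integration method for estimating
# stop-signal reaction time (SSRT); all values are invented.

def estimate_ssrt(go_rts, mean_ssd, p_respond_given_stop):
    """SSRT = go-RT at the p(respond | stop signal) quantile,
    minus the mean stop-signal delay (SSD)."""
    rts = sorted(go_rts)
    # index of the go RT below which p_respond of the distribution falls
    idx = min(len(rts) - 1, int(round(p_respond_given_stop * len(rts))))
    return rts[idx] - mean_ssd

go_rts = [420, 450, 460, 480, 500, 510, 530, 550, 580, 620]  # ms
# Responded on half of stop trials, with a mean SSD of 250 ms:
print(estimate_ssrt(go_rts, mean_ssd=250, p_respond_given_stop=0.5))  # 260
```

A shorter SSRT indicates faster, more efficient response inhibition; real analyses use many more trials and often interpolate the quantile rather than index a sorted list.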
Social Cognition Psychometric Evaluation: Results of the Final Validation Study.
Pinkham, Amy E; Harvey, Philip D; Penn, David L
2018-06-06
Social cognition is increasingly recognized as an important treatment target in schizophrenia; however, the dearth of well-validated measures that are suitable for use in clinical trials remains a significant limitation. The Social Cognition Psychometric Evaluation (SCOPE) study addresses this need by systematically evaluating the psychometric properties of promising measures. In this final phase of SCOPE, eight new or modified tasks were evaluated. Stable outpatients with schizophrenia (n = 218) and healthy controls (n = 154) completed the battery at baseline and 2-4 weeks later across three sites. Tasks included the Bell Lysaker Emotion Recognition Task (BLERT), Penn Emotion Recognition Task (ER-40), Reading the Mind in the Eyes Task (Eyes), The Awareness of Social Inferences Test (TASIT), Hinting Task, Mini Profile of Nonverbal Sensitivity (MiniPONS), Social Attribution Task-Multiple Choice (SAT-MC), and Intentionality Bias Task (IBT). BLERT and ER-40 modifications included response time and confidence ratings. The Eyes task was modified to include definitions of terms and TASIT to include response time. Hinting was scored with more stringent criteria. MiniPONS, SAT-MC, and IBT were new to this phase. Tasks were evaluated on (1) test-retest reliability, (2) utility as a repeated measure, (3) relationship to functional outcome, (4) practicality and tolerability, (5) sensitivity to group differences, and (6) internal consistency. Hinting, BLERT, and ER-40 showed the strongest psychometric properties and are recommended for use in clinical trials. Eyes, TASIT, and IBT showed somewhat weaker psychometric properties and require further study. MiniPONS and SAT-MC showed poorer psychometric properties that suggest caution for their use in clinical trials.