Sample records for facial recognition memory

  1. Neuroanatomical substrates involved in unrelated false facial recognition.

    PubMed

    Ronzon-Gonzalez, Eliane; Hernandez-Castillo, Carlos R; Pasaye, Erick H; Vaca-Palomares, Israel; Fernandez-Ruiz, Juan

    2017-11-22

Identifying faces is a process central to social interaction and a relevant factor in eyewitness theory. False recognition is a critical mistake in an eyewitness identification scenario because it can lead to a wrongful conviction. Previous studies have described neural areas related to false facial recognition using the standard Deese/Roediger-McDermott (DRM) paradigm, which triggers related false recognition. Nonetheless, misidentification of faces without any attempt to elicit false memories (unrelated false recognition), as in a police lineup, could involve different cognitive processes and distinct neural areas. To delve into the neural circuitry of unrelated false recognition, we evaluated participants' memory and response confidence while they viewed photographs of faces in an fMRI task. Functional activations of unrelated false recognition were identified by contrasting the activations in this condition against those related to correct recognition (hits) and correct rejections. The results identified the right precentral and cingulate gyri as areas with distinctive activations during false recognition events, suggesting a conflict resulting in a dysfunction during memory retrieval. High confidence ratings suggested that about 50% of misidentifications may be related to an unconscious process. These findings add to our understanding of the construction of facial memories and their biological basis, and of the fallibility of eyewitness testimony.

  2. Fusiform gyrus volume reduction and facial recognition in chronic schizophrenia.

    PubMed

    Onitsuka, Toshiaki; Shenton, Martha E; Kasai, Kiyoto; Nestor, Paul G; Toner, Sarah K; Kikinis, Ron; Jolesz, Ferenc A; McCarley, Robert W

    2003-04-01

    The fusiform gyrus (FG), or occipitotemporal gyrus, is thought to subserve the processing and encoding of faces. Of note, several studies have reported that patients with schizophrenia show deficits in facial processing. It is thus hypothesized that the FG might be one brain region underlying abnormal facial recognition in schizophrenia. The objectives of this study were to determine whether there are abnormalities in gray matter volumes for the anterior and the posterior FG in patients with chronic schizophrenia and to investigate relationships between FG subregions and immediate and delayed memory for faces. Patients were recruited from the Boston VA Healthcare System, Brockton Division, and control subjects were recruited through newspaper advertisement. Study participants included 21 male patients diagnosed as having chronic schizophrenia and 28 male controls. Participants underwent high-spatial-resolution magnetic resonance imaging, and facial recognition memory was evaluated. Main outcome measures included anterior and posterior FG gray matter volumes based on high-spatial-resolution magnetic resonance imaging, a detailed and reliable manual delineation using 3-dimensional information, and correlation coefficients between FG subregions and raw scores on immediate and delayed facial memory derived from the Wechsler Memory Scale III. Patients with chronic schizophrenia had overall smaller FG gray matter volumes (10%) than normal controls. Additionally, patients with schizophrenia performed more poorly than normal controls in both immediate and delayed facial memory tests. Moreover, the degree of poor performance on delayed memory for faces was significantly correlated with the degree of bilateral anterior FG reduction in patients with schizophrenia. These results suggest that neuroanatomic FG abnormalities underlie at least some of the deficits associated with facial recognition in schizophrenia.

  3. Effect of positive emotion on consolidation of memory for faces: the modulation of facial valence and facial gender.

    PubMed

    Wang, Bo

    2013-01-01

Studies have shown that emotion elicited after learning enhances memory consolidation. However, no prior studies have used facial photos as stimuli. This study examined the effect of post-learning positive emotion on the consolidation of memory for faces. During learning, participants viewed neutral, positive, or negative faces. They were then assigned to a condition in which they watched either a 9-minute positive video clip or a 9-minute neutral video. Thirty minutes after learning, participants took a surprise memory test in which they made "remember", "know", and "new" judgements. The findings are: (1) positive emotion enhanced consolidation of recognition for negative male faces but impaired consolidation of recognition for negative female faces; (2) for males, recognition of negative faces was equivalent to that of positive faces; for females, recognition of negative faces was better than that of positive faces. Our study provides important evidence that the effect of post-learning emotion on memory consolidation can extend to facial stimuli and that such an effect can be modulated by facial valence and facial gender. The findings may shed light on models of the influence of emotion on memory consolidation.

  4. Identity modulates short-term memory for facial emotion.

    PubMed

    Galster, Murray; Kahana, Michael J; Wilson, Hugh R; Sekuler, Robert

    2009-12-01

    For some time, the relationship between processing of facial expression and facial identity has been in dispute. Using realistic synthetic faces, we reexamined this relationship for both perception and short-term memory. In Experiment 1, subjects tried to identify whether the emotional expression on a probe stimulus face matched the emotional expression on either of two remembered faces that they had just seen. The results showed that identity strongly influenced recognition short-term memory for emotional expression. In Experiment 2, subjects' similarity/dissimilarity judgments were transformed by multidimensional scaling (MDS) into a 2-D description of the faces' perceptual representations. Distances among stimuli in the MDS representation, which showed a strong linkage of emotional expression and facial identity, were good predictors of correct and false recognitions obtained previously in Experiment 1. The convergence of the results from Experiments 1 and 2 suggests that the overall structure and configuration of faces' perceptual representations may parallel their representation in short-term memory and that facial identity modulates the representation of facial emotion, both in perception and in memory. The stimuli from this study may be downloaded from http://cabn.psychonomic-journals.org/content/supplemental.

  5. Identity modulates short-term memory for facial emotion

    PubMed Central

    Galster, Murray; Kahana, Michael J.; Wilson, Hugh R.; Sekuler, Robert

    2010-01-01

    For some time, the relationship between processing of facial expression and facial identity has been in dispute. Using realistic synthetic faces, we reexamined this relationship for both perception and short-term memory. In Experiment 1, subjects tried to identify whether the emotional expression on a probe stimulus face matched the emotional expression on either of two remembered faces that they had just seen. The results showed that identity strongly influenced recognition short-term memory for emotional expression. In Experiment 2, subjects’ similarity/dissimilarity judgments were transformed by multidimensional scaling (MDS) into a 2-D description of the faces’ perceptual representations. Distances among stimuli in the MDS representation, which showed a strong linkage of emotional expression and facial identity, were good predictors of correct and false recognitions obtained previously in Experiment 1. The convergence of the results from Experiments 1 and 2 suggests that the overall structure and configuration of faces’ perceptual representations may parallel their representation in short-term memory and that facial identity modulates the representation of facial emotion, both in perception and in memory. The stimuli from this study may be downloaded from http://cabn.psychonomic-journals.org/content/supplemental. PMID:19897794
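Records 4 and 5 describe converting similarity/dissimilarity judgments into a 2-D perceptual representation with multidimensional scaling (MDS). As a minimal illustration of the general technique (classical/Torgerson MDS, not necessarily the authors' exact procedure, and with hypothetical data), the embedding can be computed from a dissimilarity matrix like this:

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions
    from an n x n symmetric dissimilarity matrix d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered Gram matrix
    w, v = np.linalg.eigh(b)                   # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:k]              # keep the k largest
    return v[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# Hypothetical "faces" laid out on a plane; their pairwise distances
# stand in for the similarity/dissimilarity judgments.
pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 1.0], [3.0, 1.0]])
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

x = classical_mds(d, k=2)
recovered = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
# For a truly 2-D configuration the embedding reproduces the original
# inter-item distances (up to rotation/reflection).
```

Distances among stimuli in such an embedding can then be correlated with correct and false recognitions, as the study does.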

  6. [Recognition of facial emotions and theory of mind in schizophrenia: could the theory of mind deficit be due to the non-recognition of facial emotions?].

    PubMed

    Besche-Richard, C; Bourrin-Tisseron, A; Olivier, M; Cuervo-Lombard, C-V; Limosin, F

    2012-06-01

Deficits in the recognition of facial emotions and in the attribution of mental states are now well documented in schizophrenic patients. However, the link between these two complex cognitive functions is not clearly understood, especially in schizophrenia. In this study, we tested the link between the recognition of facial emotions and mentalizing capacities, notably the attribution of beliefs, in healthy and schizophrenic participants. We hypothesized that performance in the recognition of facial emotions, rather than working memory or executive functioning, would be the best predictor of the capacity to attribute a belief. Twenty clinically stabilized schizophrenic participants according to DSM-IV-TR (mean age: 35.9 years, S.D. 9.07; mean education level: 11.15 years, S.D. 2.58), receiving neuroleptic or antipsychotic medication, participated in the study. They were matched on age (mean age: 36.3 years, S.D. 10.9) and educational level (mean educational level: 12.10, S.D. 2.25) with 30 matched healthy participants. All participants were evaluated with a battery of tasks testing the recognition of facial emotions (the faces of Baron-Cohen), the attribution of beliefs (two first-order and two second-order stories), working memory (the digit span of the WAIS-III and the Corsi test), and executive functioning (Trail Making Test A and B, Wisconsin Card Sorting Test brief version). Comparing schizophrenic and healthy participants, our results confirmed a difference between performance in the recognition of facial emotions and performance in the attribution of beliefs. Simple linear regression showed that the recognition of facial emotions, rather than working memory or executive functioning, was the best predictor of performance on the theory-of-mind stories. Our results confirmed, in a sample of schizophrenic patients, the deficits in the recognition of facial emotions and in the

  7. [Association between intelligence development and facial expression recognition ability in children with autism spectrum disorder].

    PubMed

    Pan, Ning; Wu, Gui-Hua; Zhang, Ling; Zhao, Ya-Fen; Guan, Han; Xu, Cai-Juan; Jing, Jin; Jin, Yu

    2017-03-01

To investigate the features of intelligence development and facial expression recognition ability, and the association between them, in children with autism spectrum disorder (ASD). A total of 27 ASD children aged 6-16 years (ASD group, full intelligence quotient >70) and age- and gender-matched normally developed children (control group) were enrolled. The Wechsler Intelligence Scale for Children Fourth Edition and Chinese Static Facial Expression Photos were used for intelligence evaluation and the facial expression recognition test. Compared with the control group, the ASD group had significantly lower scores for full intelligence quotient, verbal comprehension index, perceptual reasoning index (PRI), processing speed index (PSI), and working memory index (WMI) (P<0.05). The ASD group also had a significantly lower overall accuracy rate of facial expression recognition and significantly lower accuracy rates for the recognition of happy, angry, sad, and frightened expressions than the control group (P<0.05). In the ASD group, the overall accuracy rate of facial expression recognition and the accuracy rates for the recognition of happy and frightened expressions were positively correlated with PRI (r=0.415, 0.455, and 0.393, respectively; P<0.05). The accuracy rate for the recognition of angry expressions was positively correlated with WMI (r=0.397; P<0.05). ASD children have delayed intelligence development compared with normally developed children, as well as impaired expression recognition ability. Perceptual reasoning and working memory abilities are positively correlated with expression recognition ability, which suggests that insufficient perceptual reasoning and working memory abilities may be important factors affecting facial expression recognition ability in ASD children.

  8. Facial Expression Influences Face Identity Recognition During the Attentional Blink

    PubMed Central

    2014-01-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry—suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another. PMID:25286076

  9. Facial expression influences face identity recognition during the attentional blink.

    PubMed

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second targets (T1, T2). They demonstrate reduced neutral T2 identity recognition after an angry or happy T1 expression, compared to a neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after a neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppressed memory access for competing objects, but only angry facial expressions enjoyed privileged memory access. This could imply that these 2 processes are relatively independent of one another.

  10. Computer Recognition of Facial Profiles

    DTIC Science & Technology

    1974-08-01

A system for the recognition of human faces from... Facial Recognition and Automatic Training... Facial Profile Recognition... provide a fair test of the classification system. The work of Goldstein, Harmon, and Lesk [8] indicates, however, that for facial recognition, a ten class

  11. [Prosopagnosia and facial expression recognition].

    PubMed

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  12. Facial recognition deficits as a potential endophenotype in bipolar disorder.

    PubMed

    Vierck, Esther; Porter, Richard J; Joyce, Peter R

    2015-11-30

Bipolar disorder (BD) is considered a highly heritable and genetically complex disorder. Several cognitive functions, such as executive functions and verbal memory, have been suggested as promising candidates for endophenotypes. Although there is evidence for deficits in facial emotion recognition in individuals with BD, studies investigating these functions as endophenotypes are rare. The current study investigates emotion recognition as a potential endophenotype in BD by comparing 36 BD participants, 24 of their first-degree relatives, and 40 healthy control participants on a computerised facial emotion recognition task. Group differences were evaluated using repeated-measures analysis of covariance with age as a covariate. Results revealed slowed emotion recognition in both BD participants and their relatives. Furthermore, BD participants were less accurate than healthy controls in their recognition of emotional expressions. We found no evidence of emotion-specific differences between groups. Our results provide evidence for facial recognition as a potential endophenotype in BD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Comparison of emotion recognition from facial expression and music.

    PubMed

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

The recognition of basic emotions in everyday communication involves the interpretation of different visual and auditory cues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro-expressions), and recognition itself does not have to be a conscious process. We assumed that recognition from facial expressions is favored over recognition of emotions communicated through music. To compare the success rates in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey that included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized when presented on human faces than in music, possibly because the understanding of facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for because of the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive capacities such as attention, memory, and motivation. Music pieces are probably processed differently in the brain than facial expressions and, consequently, probably evaluated differently as relevant emotional cues.

  14. Facial recognition performance of female inmates as a result of sexual assault history.

    PubMed

    Islam-Zwart, Kayleen A; Heath, Nicole M; Vik, Peter W

    2005-06-01

    This study examined the effect of sexual assault history on facial recognition performance. Gender of facial stimuli and posttraumatic stress disorder (PTSD) symptoms also were expected to influence performance. Fifty-six female inmates completed an interview and the Wechsler Memory Scale-Third Edition Faces I and Faces II subtests (Wechsler, 1997). Women with a sexual assault exhibited better immediate and delayed facial recognition skills than those with no assault history. There were no differences in performance based on the gender of faces or PTSD diagnosis. Immediate facial recognition was correlated with report of PTSD symptoms. Findings provide greater insight into women's reactions to, and the uniqueness of, the trauma of sexual victimization.

  15. [Neural mechanisms of facial recognition].

    PubMed

    Nagai, Chiyoko

    2007-01-01

We review recent research on the neural mechanisms of facial recognition in light of three aspects: facial discrimination and identification, recognition of facial expressions, and face perception in itself. First, it has been demonstrated that the fusiform gyrus plays a main role in facial discrimination and identification. However, whether the FFA (fusiform face area) is really specialized for facial processing remains controversial; some researchers argue that the FFA is related to 'becoming an expert' with certain kinds of visual objects, including faces. The neural mechanisms of prosopagnosia are deeply relevant to this issue. Second, the amygdala appears closely involved in the recognition of facial expressions, especially fear. The amygdala, connected with the superior temporal sulcus and the orbitofrontal cortex, appears to modulate cortical function. The amygdala and the superior temporal sulcus are related to gaze recognition, which explains why a patient with bilateral amygdala damage could not recognize only the fear expression; information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, which is congruent with the innate hypothesis of facial recognition. Some researchers speculate that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network may underlie the covert recognition that prosopagnosic patients retain.

  16. Memory deficits for facial identity in patients with amnestic mild cognitive impairment (MCI).

    PubMed

    Savaskan, Egemen; Summermatter, Daniel; Schroeder, Clemens; Schächinger, Hartmut

    2018-01-01

Faces are among the most relevant social stimuli, revealing the identity and current emotional state of the person encountered. Deficits in facial recognition may be an early sign of cognitive decline leading to social deficits. The main objective of the present study was to investigate whether individuals with amnestic mild cognitive impairment show recognition deficits for facial identity. Thirty-seven individuals with amnestic mild cognitive impairment, multiple-domain (15 female; age: 75±8 yrs.) and forty-one healthy volunteers (24 female; age: 71±6 yrs.) participated. All participants completed a human portrait memory test presenting unfamiliar faces with happy and angry emotional expressions. Five and thirty minutes later, old and new neutral faces were presented, and discrimination sensitivity (d') and response bias (C) were assessed as signal detection parameters of cued facial identity recognition. Memory performance was lower in amnestic mild cognitive impairment than in control subjects, mainly because of a response bias shifted towards an increased false alarm rate (favoring false OLD ascription of NEW items). In both groups, memory performance declined between the early and later testing sessions and was always better for faces acquired as happy than as angry. Facial identity memory is impaired in patients with amnestic mild cognitive impairment. Liberalization of the response bias may reflect a socially motivated compensatory mechanism maintaining an almost identical recognition hit rate for OLD faces in individuals with amnestic mild cognitive impairment.
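Record 16 reports discrimination sensitivity (d') and response bias (C) as signal detection parameters. A minimal sketch of how these are computed from hit and false alarm rates (the rates below are illustrative, not the study's data):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Signal detection sensitivity d' and criterion C from hit and
    false alarm rates; both rates must lie strictly in (0, 1)."""
    z = NormalDist().inv_cdf                       # probit (z) transform
    d_prime = z(hit_rate) - z(fa_rate)             # discrimination sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias; negative = liberal
    return d_prime, criterion

# A liberal bias pattern (inflated false alarm rate), qualitatively like
# the one the abstract describes for the aMCI group:
dp, c = sdt_measures(hit_rate=0.85, fa_rate=0.40)
# dp > 0: OLD and NEW faces are discriminated above chance
# c < 0: a liberal tendency to call items "OLD"
```

In practice, extreme rates of exactly 0 or 1 are usually adjusted (e.g., with a log-linear correction) before applying the z-transform, since the probit of 0 or 1 is undefined.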

  17. Effects of Lateral Reversal on Recognition Memory for Photographs of Faces.

    ERIC Educational Resources Information Center

    McKelvie, Stuart J.

    1983-01-01

    Examined recognition memory for photographs of faces in four experiments using students and adults. Results supported a feature (rather than Gestalt) model of facial recognition in which the two sides of the face are different in its memory representation. (JAC)

  18. Relative preservation of the recognition of positive facial expression "happiness" in Alzheimer disease.

    PubMed

    Maki, Yohko; Yoshida, Hiroshi; Yamaguchi, Tomoharu; Yamaguchi, Haruyasu

    2013-01-01

Positivity recognition bias has been reported for facial expressions, as well as for memory and visual stimuli, in aged individuals, whereas emotional facial recognition in Alzheimer disease (AD) patients is controversial, with possible involvement of confounding factors such as deficits in the spatial processing of non-emotional facial features and in the verbal processing needed to express emotions. We therefore examined whether recognition of positive facial expressions is preserved in AD patients, using a new method that eliminates the influence of these confounding factors. Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate factors related to verbal processing, the participants were required to match the stimulus and answer images, avoiding the use of verbal labels. In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in the recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in the recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than sensitivity to the other five expressions. In AD patients, recognition of happiness was relatively preserved: it was the most sensitive and withstood the influences of age and disease.

  19. Facial recognition in education system

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

Human beings rely extensively on emotions for conveying messages and resolving them. Emotion detection and face recognition can provide an interface between individuals and technologies. The most successful application of recognition analysis is the recognition of faces. Many different techniques have been used to recognize facial expressions and to detect emotion across varying poses. In this paper, we present an efficient method for recognizing facial expressions by tracking face points and distances. It can automatically identify an observer's face movements and facial expression in an image, capturing different aspects of emotion and facial expression.
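Record 19 describes recognizing expressions by tracking face points and distances. A toy sketch of that idea, with hypothetical landmark coordinates and a feature scheme chosen for illustration rather than the paper's actual method:

```python
import numpy as np

def distance_features(landmarks):
    """Turn tracked face points into a scale-normalized feature vector
    of all pairwise inter-point distances."""
    p = np.asarray(landmarks, dtype=float)
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    iu = np.triu_indices(len(p), k=1)   # each unordered pair once
    feats = d[iu]
    return feats / feats.max()          # divide by the largest distance

# Hypothetical landmarks: two eye corners, nose tip, two mouth corners.
neutral = [(30, 40), (70, 40), (50, 60), (35, 80), (65, 80)]
smiling = [(30, 40), (70, 40), (50, 60), (32, 78), (68, 78)]  # wider mouth
f_neutral = distance_features(neutral)
f_smiling = distance_features(smiling)
# The two feature vectors differ where the mouth landmarks moved, so a
# simple classifier over such features could separate the expressions.
```

Normalizing by the largest inter-point distance makes the features invariant to face size in the image, which matters when poses and camera distances vary.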

  20. Composite Artistry Meets Facial Recognition Technology: Exploring the Use of Facial Recognition Technology to Identify Composite Images

    DTIC Science & Technology

    2011-09-01

    be submitted into a facial recognition program for comparison with millions of possible matches, offering abundant opportunities to identify the...to leverage the robust number of comparative opportunities associated with facial recognition programs. This research investigates the efficacy of...combining composite forensic artistry with facial recognition technology to create a viable investigative tool to identify suspects, as well as better

  1. Biometrics: A Look at Facial Recognition

    DTIC Science & Technology

    a facial recognition system in the city’s Oceanfront tourist area. The system has been tested and has recently been fully implemented. Senator...Kenneth W. Stolle, the Chairman of the Virginia State Crime Commission, established a Facial Recognition Technology Sub-Committee to examine the issue of... facial recognition technology. This briefing begins by defining biometrics and discussing examples of the technology. It then explains how biometrics

  2. Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition.

    PubMed

    de la Rosa, Stephan; Fademrecht, Laura; Bülthoff, Heinrich H; Giese, Martin A; Curio, Cristóbal

    2018-06-01

Motor-based theories of facial expression recognition propose that the visual perception of a facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression from that of the repeated viewing of facial expressions. Moreover, the findings in the motor condition, but not in the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results can be well accounted for by the idea that facial expression recognition is not always mediated by motor processes but can also rely on visual information alone.

  3. Glucose enhancement of a facial recognition task in young adults.

    PubMed

    Metzger, M M

    2000-02-01

    Numerous studies have reported that glucose administration enhances memory processes in both elderly and young adult subjects. Although these studies have utilized a variety of procedures and paradigms, investigations of both young and elderly subjects have typically used verbal tasks (word list recall, paragraph recall, etc.). In the present study, the effect of glucose consumption on a nonverbal, facial recognition task in young adults was examined. Lemonade sweetened with either glucose (50 g) or saccharin (23.7 mg) was consumed by college students (mean age of 21.1 years) 15 min prior to a facial recognition task. The task consisted of a familiarization phase in which subjects were presented with "target" faces, followed immediately by a recognition phase in which subjects had to identify the targets among a random array of familiar target and novel "distractor" faces. Statistical analysis indicated that there were no differences on hit rate (target identification) for subjects who consumed either saccharin or glucose prior to the test. However, further analyses revealed that subjects who consumed glucose committed significantly fewer false alarms and had (marginally) higher d-prime scores (a signal detection measure) compared to subjects who consumed saccharin prior to the test. These results parallel a previous report demonstrating glucose enhancement of a facial recognition task in probable Alzheimer's patients; however, this is believed to be the first demonstration of glucose enhancement for a facial recognition task in healthy, young adults.

  4. Facial recognition in children after perinatal stroke.

    PubMed

    Ballantyne, A O; Trauner, D A

    1999-04-01

    To examine the effects of prenatal or perinatal stroke on the facial recognition skills of children and young adults. It was hypothesized that the nature and extent of facial recognition deficits seen in patients with early-onset lesions would be different from that seen in adults with later-onset neurologic impairment. Numerous studies with normal and neurologically impaired adults have found a right-hemisphere superiority for facial recognition. In contrast, little is known about facial recognition in children after early focal brain damage. Forty subjects had single, unilateral brain lesions from pre- or perinatal strokes (20 had left-hemisphere damage, and 20 had right-hemisphere damage), and 40 subjects were controls who were individually matched to the lesion subjects on the basis of age, sex, and socioeconomic status. Each subject was given the Short-Form of Benton's Test of Facial Recognition. Data were analyzed using the Wilcoxon matched-pairs signed-rank test and multiple regression. The lesion subjects performed significantly more poorly than did matched controls. There was no clear-cut lateralization effect, with the left-hemisphere group performing significantly more poorly than matched controls and the right-hemisphere group showing a trend toward poorer performance. Parietal lobe involvement, regardless of lesion side, adversely affected facial recognition performance in the lesion group. Results could not be accounted for by IQ differences between lesion and control groups, nor was lesion severity systematically related to facial recognition performance. Pre- or perinatal unilateral brain damage results in a subtle disturbance in facial recognition ability, independent of the side of the lesion. Parietal lobe involvement, in particular, has an adverse effect on facial recognition skills. These findings suggest that the parietal lobes may be involved in the acquisition of facial recognition ability from a very early point in brain development, but

  5. The Impact of Sex Differences on Odor Identification and Facial Affect Recognition in Patients with Schizophrenia Spectrum Disorders.

    PubMed

    Mossaheb, Nilufar; Kaufmann, Rainer M; Schlögelhofer, Monika; Aninilkumparambil, Thushara; Himmelbauer, Claudia; Gold, Anna; Zehetmayer, Sonja; Hoffmann, Holger; Traue, Harald C; Aschauer, Harald

    2018-01-01

    Social interactive functions such as facial emotion recognition and smell identification have been shown to differ between women and men. However, little is known about how these differences are mirrored in patients with schizophrenia, and how these abilities interact with each other and with other clinical variables in patients vs. healthy controls. Standardized instruments were used to assess facial emotion recognition [Facially Expressed Emotion Labelling (FEEL)] and smell identification [University of Pennsylvania Smell Identification Test (UPSIT)] in 51 patients with schizophrenia spectrum disorders and 79 healthy controls; furthermore, working memory functions and clinical variables were assessed. In both the univariate and the multivariate results, illness showed a significant influence on UPSIT and FEEL. The inclusion of age and working memory in the MANOVA resulted in a differential effect, with sex and working memory as the remaining significant factors. Duration of illness was correlated with both emotion recognition and smell identification in men only, whereas general psychopathology and negative symptoms were associated with emotion recognition only in women. Being affected by a schizophrenia spectrum disorder impacts the ability to correctly recognize facial affect and identify odors. Converging evidence suggests a link between the investigated basic and social cognitive abilities in patients with schizophrenia spectrum disorders, with a strong contribution of working memory and differential effects of modulators in women vs. men.

  6. [Measuring impairment of facial affects recognition in schizophrenia. Preliminary study of the facial emotions recognition task (TREF)].

    PubMed

    Gaudelus, B; Virgile, J; Peyroux, E; Leleu, A; Baudouin, J-Y; Franck, N

    2015-06-01

    The impairment of social cognition, including facial affect recognition, is a well-established trait in schizophrenia, and specific cognitive remediation programs focusing on facial affect recognition have been developed by different teams worldwide. However, even though social cognitive impairments have been confirmed, previous studies have also shown heterogeneity of results between subjects. Individual abilities should therefore be assessed before proposing such programs. Most research teams apply tasks based on the facial affect sets of Ekman et al. or Gur et al. However, these tasks are not easily applicable in clinical practice. Here, we present the Facial Emotions Recognition Test (TREF), which is designed to identify facial affect recognition impairment in clinical practice. The test is composed of 54 photos and evaluates recognition of six universal emotions (joy, anger, sadness, fear, disgust and contempt). Each emotion is represented by colored photos of four models (two men and two women) at nine intensity levels from 20 to 100%. Each photo is presented for 10 seconds; no time limit for responding is applied. The present study compared TREF scores in a sample of healthy controls (64 subjects) and people with stabilized schizophrenia according to DSM-IV-TR criteria (45 subjects). We analysed global scores for all emotions, as well as subscores for each emotion, between these two groups, taking gender differences into account. Our results were consistent with previous findings. Using the TREF, we confirmed an impairment of facial affect recognition in schizophrenia by showing significant differences between the two groups in global scores (76.45% for healthy controls versus 61.28% for people with schizophrenia), as well as in subscores for each emotion except joy. Scores for women were significantly higher than for men in the population.

  7. Attention and memory bias to facial emotions underlying negative symptoms of schizophrenia.

    PubMed

    Jang, Seon-Kyeong; Park, Seon-Cheol; Lee, Seung-Hwan; Cho, Yang Seok; Choi, Kee-Hong

    2016-01-01

    This study assessed bias in selective attention to facial emotions in negative symptoms of schizophrenia and its influence on subsequent memory for facial emotions. Thirty people with schizophrenia who had high or low levels of negative symptoms (n = 15 each) and 21 healthy controls completed a visual probe detection task investigating selective attention bias (happy, sad, and angry faces randomly presented for 50, 500, or 1000 ms). A yes/no incidental facial memory task was then completed. Attention bias scores and recognition errors were calculated. Those with high negative symptoms exhibited reduced attention to emotional faces relative to neutral faces; those with low negative symptoms showed the opposite pattern when faces were presented for 500 ms, regardless of valence. Compared to healthy controls, those with high negative symptoms made more errors for happy faces in the memory task. Reduced attention to emotional faces in the probe detection task was significantly associated with less pleasure and motivation, and with more recognition errors for happy faces, in the schizophrenia group only. Attention bias away from emotional information relatively early in the attentional process, and the associated diminished positive memory, may relate to pathological mechanisms of negative symptoms.

  8. [Neurological disease and facial recognition].

    PubMed

    Kawamura, Mitsuru; Sugimoto, Azusa; Kobayakawa, Mutsutaka; Tsuruya, Natsuko

    2012-07-01

    To discuss the neurological basis of facial recognition, we present our case reports of impaired recognition and a review of previous literature. First, we present a case of infarction and discuss prosopagnosia, which has had a large impact on face recognition research. From a study of patient symptoms, we assume that prosopagnosia may be caused by a unilateral right occipitotemporal lesion and right cerebral dominance of facial recognition. Further, circumscribed lesions and degenerative disease may also cause progressive prosopagnosia. Apperceptive prosopagnosia is observed in patients with posterior cortical atrophy (PCA), pathologically considered to be Alzheimer's disease, and associative prosopagnosia in frontotemporal lobar degeneration (FTLD). Second, we discuss face recognition as part of communication. Patients with Parkinson disease show social cognitive impairments, such as difficulty in facial expression recognition and deficits in theory of mind as detected by the reading the mind in the eyes test. Pathological and functional imaging studies indicate that social cognitive impairment in Parkinson disease is possibly related to damage to the amygdalae and surrounding limbic system. These social cognitive deficits can be observed in the early stages of Parkinson disease, and even in the prodromal stage; for example, patients with rapid eye movement (REM) sleep behavior disorder (RBD) show impairment in facial expression recognition. Further, patients with myotonic dystrophy type 1 (DM1), a multisystem disease that mainly affects the muscles, show social cognitive impairment similar to that of Parkinson disease. Our previous study showed that impaired facial expression recognition in DM1 patients is associated with lesions in the amygdalae and insulae. Our results indicate that the behaviors and personality traits of DM1 patients revealed by social cognitive impairment are attributable to dysfunction of the limbic system.

  9. Constructive autoassociative neural network for facial recognition.

    PubMed

    Fernandes, Bruno J T; Cavalcanti, George D C; Ren, Tsang I

    2014-01-01

    Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature.
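
    The CANet architecture itself is specific to the authors, but the underlying idea of autoassociative memory for recognition (judge a probe by how well it reconstructs through the stored patterns) can be sketched minimally. Everything below, including the toy 16-dimensional "face" vectors, is a hypothetical illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 16-dimensional "face" feature vectors (stand-ins for images).
known_faces = rng.standard_normal((5, 16))

# Linear autoassociative memory: least-squares mapping that reproduces each
# stored pattern; W = X^+ X projects probes onto the row space of the memory.
W = np.linalg.pinv(known_faces) @ known_faces

def reconstruction_error(x):
    """Distance between a probe and its reconstruction through the memory."""
    return float(np.linalg.norm(x - x @ W))

stored = known_faces[0]
novel = rng.standard_normal(16)

# Stored patterns reconstruct almost perfectly; unfamiliar ones do not,
# which is the basis for accepting or rejecting a probe face.
assert reconstruction_error(stored) < 1e-6
assert reconstruction_error(novel) > reconstruction_error(stored)
```

    A constructive variant in the spirit of the abstract would grow or prune this memory as patterns are added, rather than fixing its size in advance.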

  10. Changes in brain activation during working memory and facial recognition tasks in patients with bipolar disorder with Lamotrigine monotherapy.

    PubMed

    Haldane, Morgan; Jogia, Jigar; Cobb, Annabel; Kozuch, Eliza; Kumari, Veena; Frangou, Sophia

    2008-01-01

    Verbal working memory and emotional self-regulation are impaired in Bipolar Disorder (BD). Our aim was to investigate the effect of Lamotrigine (LTG), which is effective in the clinical management of BD, on the neural circuits subserving working memory and emotional processing. Functional Magnetic Resonance Imaging data from 12 stable BD patients were used to detect LTG-induced changes as the differences in brain activity between drug-free and post-LTG monotherapy conditions during a verbal working memory task (N-back sequential letter task) and an angry facial affect recognition task. For both tasks, LTG monotherapy compared to baseline was associated with increased activation mostly within the prefrontal cortex and cingulate gyrus, in regions normally engaged in verbal working memory and emotional processing. Therefore, LTG monotherapy in BD patients may enhance cortical function within neural circuits involved in memory and emotional self-regulation.

  11. Cognitive mechanisms of false facial recognition in older adults.

    PubMed

    Edmonds, Emily C; Glisky, Elizabeth L; Bartlett, James C; Rapcsak, Steven Z

    2012-03-01

    Older adults show elevated false alarm rates on recognition memory tests involving faces in comparison to younger adults. It has been proposed that this age-related increase in false facial recognition reflects a deficit in recollection and a corresponding increase in the use of familiarity when making memory decisions. To test this hypothesis, we examined the performance of 40 older adults and 40 younger adults on a face recognition memory paradigm involving three different types of lures with varying levels of familiarity. A robust age effect was found, with older adults demonstrating a markedly heightened false alarm rate in comparison to younger adults for "familiarized lures" that were exact repetitions of faces encountered earlier in the experiment, but outside the study list, and therefore required accurate recollection of contextual information to reject. By contrast, there were no age differences in false alarms to "conjunction lures" that recombined parts of study list faces, or to entirely new faces. Overall, the pattern of false recognition errors observed in older adults was consistent with excessive reliance on a familiarity-based response strategy. Specifically, in the absence of recollection older adults appeared to base their memory decisions on item familiarity, as evidenced by a linear increase in false alarm rates with increasing familiarity of the lures. These findings support the notion that automatic memory processes such as familiarity remain invariant with age, while more controlled memory processes such as recollection show age-related decline.
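
    The familiarity-based account in this record lends itself to a standard signal-detection summary. As a hedged illustration (the counts below are hypothetical, not the study's data), hit and false-alarm rates can be converted to the sensitivity index d′:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from raw recognition-memory counts.

    Uses a log-linear correction (+0.5) so rates of exactly 0 or 1
    do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: elevated false alarms to familiarized lures lower
# older adults' d' even when the hit rate is comparable across groups.
older = d_prime(hits=32, misses=8, false_alarms=18, correct_rejections=22)
younger = d_prime(hits=33, misses=7, false_alarms=6, correct_rejections=34)
assert younger > older
```

    Separating d′ by lure type (familiarized, conjunction, new) is one conventional way to expose the familiarity gradient the abstract describes.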

  12. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system from video sequence images, dedicated to identifying persons whose faces are partly occluded. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, then a Multi-Layer Perceptron classifier was used. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes, 98.16% for the nose, and 97.25% for the whole face).
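
    As a hedged sketch of the two-dimensional PCA half of such a hybrid (matrix sizes and data below are hypothetical; the paper's ACPDL2D additionally applies 2D-LDA and a neural-network classifier), image matrices can be projected without first being flattened into long vectors:

```python
import numpy as np

def two_d_pca(images, k):
    """Two-dimensional PCA: project each image matrix onto the top-k
    eigenvectors of the image covariance matrix, keeping 2D structure.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # Image covariance: average of A^T A over the centered image matrices.
    cov = sum(a.T @ a for a in centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    projection = eigvecs[:, -k:]             # top-k eigenvectors
    return [a @ projection for a in images], projection

rng = np.random.default_rng(1)
faces = rng.standard_normal((10, 12, 8))     # ten toy 12x8 "eye region" images
features, P = two_d_pca(faces, k=3)

assert features[0].shape == (12, 3)          # compact per-image feature matrix
assert P.shape == (8, 3)
```

    Each facial part (eyes, nose) would get its own projection of this kind before classification, which is what keeps the per-part memory footprint small.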

  13. Brain correlates of musical and facial emotion recognition: evidence from the dementias.

    PubMed

    Hsieh, S; Hornberger, M; Piguet, O; Hodges, J R

    2012-07-01

    The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities: unfamiliar musical tunes and unknown faces as well as volumetric MRI. Patients with SD were most impaired with the recognition of facial and musical emotions, particularly for negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions by facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Face memory and face recognition in children and adolescents with attention deficit hyperactivity disorder: A systematic review.

    PubMed

    Romani, Maria; Vigliante, Miriam; Faedda, Noemi; Rossetti, Serena; Pezzuti, Lina; Guidetti, Vincenzo; Cardona, Francesco

    2018-06-01

    This review focuses on facial recognition abilities in children and adolescents with attention deficit hyperactivity disorder (ADHD). A systematic review, using PRISMA guidelines, was conducted to identify original articles published prior to May 2017 pertaining to memory, face recognition, affect recognition, facial expression recognition and recall of faces in children and adolescents with ADHD. The qualitative synthesis based on different studies shows a particular focus of the research on facial affect recognition without paying similar attention to the structural encoding of facial recognition. In this review, we further investigate facial recognition abilities in children and adolescents with ADHD, providing synthesis of the results observed in the literature, while detecting face recognition tasks used on face processing abilities in ADHD and identifying aspects not yet explored. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Visual Scan Paths and Recognition of Facial Identity in Autism Spectrum Disorder and Typical Development

    PubMed Central

    Wilson, C. Ellie; Palermo, Romina; Brock, Jon

    2012-01-01

    Background Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. Methodology and Principal Findings Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the ‘Dynamic Scanning Index’ – which was incremented each time the participant saccaded into and out of one of the core-feature interest areas – was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. Conclusions and Significance In support of the second hypothesis, results suggested that increased saccading between core-features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined. PMID:22666378

  16. Gender differences in memory processing of female facial attractiveness: evidence from event-related potentials.

    PubMed

    Zhang, Yan; Wei, Bin; Zhao, Peiqiong; Zheng, Minxiao; Zhang, Lili

    2016-06-01

    High rates of agreement in the judgment of facial attractiveness suggest universal principles of beauty. This study investigated gender differences in recognition memory processing of female facial attractiveness. Thirty-four Chinese heterosexual participants (17 females, 17 males) aged 18-24 years (mean age 21.63 ± 1.51 years) took part in the experiment, which used event-related potentials (ERPs) in a study-test paradigm. The behavioral results showed that both men and women had significantly higher accuracy rates for attractive faces than for unattractive faces, but men reacted faster to unattractive faces. Gender differences in ERPs showed that attractive faces elicited larger early components, such as P1, N170, and P2, in men than in women. The results indicated that the recognition bias during memory processing modulated by female facial attractiveness is greater for men than for women. Behavioral and ERP evidence indicates that men and women differ in their attentional adhesion to attractive female faces; different mating-related motives may guide the selective processing of attractive faces in men and women. These findings establish a contribution of gender differences to the memory processing of female facial attractiveness from an evolutionary perspective.

  17. [Emotional facial expression recognition impairment in Parkinson disease].

    PubMed

    Lachenal-Chevallet, Karine; Bediou, Benoit; Bouvard, Martine; Thobois, Stéphane; Broussolle, Emmanuel; Vighetto, Alain; Krolak-Salmon, Pierre

    2006-03-01

    Some behavioral disturbances observed in Parkinson's disease (PD) could be related to impaired recognition of social messages, particularly emotional facial expressions. Facial expression recognition was assessed using morphed faces (five emotions: happiness, fear, anger, disgust, neutral) and compared with gender recognition and general cognitive assessment in 12 patients with Parkinson's disease and 14 control subjects. Facial expression recognition was impaired in patients, whereas gender recognition, visuo-perceptive capacities and overall efficiency were preserved. Post hoc analyses disclosed a deficit for fear and disgust recognition compared with control subjects. The impairment of emotional facial expression recognition in PD appears independent of other cognitive deficits. It may be related to dopaminergic depletion in the basal ganglia and limbic brain regions, and could play a part in the psycho-behavioral disorders, particularly the communication disorders, observed in patients with Parkinson's disease.

  18. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    PubMed Central

    Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240

  20. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    PubMed

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed under a promotion focus than under a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had no impact on either perceptual processing or facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.

  1. Facial Recognition Training: Improving Intelligence Collection by Soldiers

    DTIC Science & Technology

    2008-01-01

    Facial Recognition Training: Improving Intelligence Collection by Soldiers. By: 2LT Michael Mitchell, MI, ALARNG. “In combat, you don’t rise to…technology, but on patrol a Soldier cannot use a device as quickly as simply looking at the subject. Why is Facial Recognition Difficult?

  2. Impaired recognition of happy facial expressions in bipolar disorder.

    PubMed

    Lawlor-Savage, Linette; Sponheim, Scott R; Goghari, Vina M

    2014-08-01

    The ability to accurately judge facial expressions is important in social interactions. Individuals with bipolar disorder have been found to be impaired in emotion recognition; however, the specifics of the impairment are unclear. This study investigated whether facial emotion recognition difficulties in bipolar disorder reflect general cognitive, or emotion-specific, impairments. Impairment in the recognition of particular emotions and the role of processing speed in facial emotion recognition were also investigated. Clinically stable bipolar patients (n = 17) and healthy controls (n = 50) judged five facial expressions in two presentation types, time-limited and self-paced. An age recognition condition was used as an experimental control. Bipolar patients' overall facial recognition ability was unimpaired. However, patients' specific ability to judge happy expressions under time constraints was impaired. Findings suggest a deficit in happy emotion recognition impacted by processing speed. Given the limited sample size, further investigation with a larger patient sample is warranted.

  3. Dynamic facial expression recognition based on geometric and texture features

    NASA Astrophysics Data System (ADS)

    Li, Ming; Wang, Zengfu

    2018-04-01

    Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method using geometric and texture features. In our system, facial landmark movements and texture variations between pairwise images are used to perform the dynamic facial expression recognition task. For each facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integrating both geometric and texture features further enhances the representation of the facial expressions. Finally, a Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method achieves performance competitive with other methods.
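
    As a hedged sketch of the geometric half of such pairwise features (the landmark coordinates and the smile example below are hypothetical, not the paper's pipeline), the displacement of each landmark between the first frame and a later frame can be flattened into a feature vector for the classifier:

```python
import math

def geometric_features(first_frame, later_frame):
    """Per-landmark displacement (dx, dy, magnitude) between the first
    frame of a sequence and a later frame, as a flat feature vector.
    """
    feats = []
    for (x0, y0), (x1, y1) in zip(first_frame, later_frame):
        dx, dy = x1 - x0, y1 - y0
        feats.extend([dx, dy, math.hypot(dx, dy)])
    return feats

# Hypothetical 3-landmark toy example: mouth corners move apart (a smile),
# while a reference landmark (e.g., nose tip) stays fixed.
neutral = [(30.0, 60.0), (50.0, 60.0), (40.0, 50.0)]
smiling = [(28.0, 58.0), (52.0, 58.0), (40.0, 50.0)]

features = geometric_features(neutral, smiling)
assert len(features) == 9        # 3 landmarks x (dx, dy, magnitude)
assert features[-1] == 0.0       # the fixed landmark did not move
```

    In the full method, vectors like this (concatenated with texture descriptors per frame pair) would be fed to the SVM.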

  4. Facial expression perception correlates with verbal working memory function in schizophrenia.

    PubMed

    Hagiya, Kumiko; Sumiyoshi, Tomiki; Kanie, Ayako; Pu, Shenghong; Kaneko, Koichi; Mogami, Tamiko; Oshima, Sachie; Niwa, Shin-ichi; Inagaki, Akiko; Ikebuchi, Emi; Kikuchi, Akiko; Yamasaki, Syudo; Iwata, Kazuhiko; Nakagome, Kazuyuki

    2015-12-01

    Facial emotion perception is considered to provide a measure of social cognition. Numerous studies have examined the perception of emotion in patients with schizophrenia, and the majority have reported an impaired ability to recognize facial emotion. We aimed to investigate the correlation between facial expression recognition and other domains of social cognition and neurocognition in Japanese patients with schizophrenia. Participants were 52 patients with schizophrenia and 53 normal controls with no history of psychiatric disease. All participants completed the Hinting Task and the Social Cognition Screening Questionnaire. The Brief Assessment of Cognition in Schizophrenia was administered only to the patients. Facial emotion perception measured by the Facial Emotion Selection Test (FEST) was compared between the patients and normal controls. Patients performed significantly worse on the FEST than normal control subjects. The FEST total score was significantly positively correlated with scores on the Brief Assessment of Cognition in Schizophrenia attention subscale, the Hinting Task, and the Social Cognition Screening Questionnaire Verbal Working Memory and Metacognition subscales. Stepwise multiple regression analysis revealed that verbal working memory function was positively related to facial emotion perception ability in patients with schizophrenia. These results point to the concept that facial emotion perception and some types of working memory use common cognitive resources. Our findings may have implications for cognitive rehabilitation and related interventions in schizophrenia. © 2015 The Authors. Psychiatry and Clinical Neurosciences © 2015 Japanese Society of Psychiatry and Neurology.

  5. Effects of exposure to facial expression variation in face learning and recognition.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  6. Recovering Faces from Memory: The Distracting Influence of External Facial Features

    ERIC Educational Resources Information Center

    Frowd, Charlie D.; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H.; Hancock, Peter J. B.

    2012-01-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried…

  7. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, traditional facial recognition algorithms reshape each facial image into a long vector, thereby losing part of the original spatial relationships among pixels. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the final vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples and applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.
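
    The abstract does not give enough detail to reproduce TRPDA itself, but the tensor-versus-vector distinction it rests on is easy to illustrate. Below is a minimal numpy sketch of mode-n unfolding, the standard operation tensor-based methods build on to work with an image stack without flattening away every spatial axis; all names are illustrative, not from the paper.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: arrange the mode-n fibers of a tensor as columns
    of a matrix. Tensor methods operate on these unfoldings instead of
    flattening each image into a single long vector."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# A tiny "image stack": 4 grayscale images of size 3 x 5 form a 3rd-order tensor.
X = np.arange(4 * 3 * 5).reshape(4, 3, 5)

# Full vectorization (what traditional algorithms do) forgets which spatial
# axis each entry came from; the unfoldings each keep one mode intact.
vec = X.reshape(4, -1)   # 4 x 15: rows lose the 3 x 5 pixel structure
U0 = unfold(X, 0)        # 4 x 15: sample mode
U1 = unfold(X, 1)        # 3 x 20: image-row mode preserved
U2 = unfold(X, 2)        # 5 x 12: image-column mode preserved
print(U1.shape, U2.shape)
```

    Discriminant analysis in the tensor setting then constrains projections per mode rather than over one huge vector space.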

  8. Italian normative data and validation of two neuropsychological tests of face recognition: Benton Facial Recognition Test and Cambridge Face Memory Test.

    PubMed

    Albonico, Andrea; Malaspina, Manuela; Daini, Roberta

    2017-09-01

    The Benton Facial Recognition Test (BFRT) and Cambridge Face Memory Test (CFMT) are two of the most common tests used to assess face discrimination and recognition abilities and to identify individuals with prosopagnosia. However, recent studies highlighted that participant-stimulus ethnicity match, as well as gender, has to be taken into account in interpreting results from these tests. Here, in order to obtain more appropriate normative data for an Italian sample, the CFMT and BFRT were administered to a large cohort of young adults. We found that scores from the BFRT are not affected by participants' gender and are only slightly affected by participant-stimulus ethnicity match, whereas both these factors seem to influence the scores of the CFMT. Moreover, the inclusion of a sample of individuals with suspected face recognition impairment allowed us to show that the use of more appropriate normative data can increase the BFRT efficacy in identifying individuals with face discrimination impairments; by contrast, the efficacy of the CFMT in classifying individuals with a face recognition deficit was confirmed. Finally, our data show that the lack of an inversion effect (the difference between the total scores of the upright and inverted versions of the CFMT) could be used as a further index to assess congenital prosopagnosia. Overall, our results confirm the importance of having norms derived from controls with a similar experience of faces as the "potential" prosopagnosic individuals when assessing face recognition abilities.
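
    The normative comparison the authors describe comes down to standardizing a raw score against control norms. A small stdlib sketch, with the norm values and cutoff chosen purely for illustration (they are NOT the Italian norms reported in the study):

```python
from statistics import NormalDist

def z_score(raw, norm_mean, norm_sd):
    """Standardize a raw test score against control norms."""
    return (raw - norm_mean) / norm_sd

def classify(raw, norm_mean, norm_sd, cutoff=-2.0):
    """Flag a score as impaired when it lies more than |cutoff| SDs below
    the normative mean (a common, though not universal, convention)."""
    return "impaired" if z_score(raw, norm_mean, norm_sd) < cutoff else "within norms"

# Hypothetical norms: CFMT upright mean 58/72 (SD 7); a suspected case scoring 40.
upright, inverted = 40, 41
verdict = classify(upright, 58, 7)
percentile = 100 * NormalDist().cdf(z_score(upright, 58, 7))

# A near-zero upright-minus-inverted difference is the missing inversion
# effect the authors propose as a further index of congenital prosopagnosia.
inversion_effect = upright - inverted
print(verdict, inversion_effect)
```

    With better-matched norms (same ethnicity and gender as the examinee), the same raw score can land on the other side of the cutoff, which is the paper's central point.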

  9. Facial emotion recognition and borderline personality pathology.

    PubMed

    Meehan, Kevin B; De Panfilis, Chiara; Cain, Nicole M; Antonucci, Camilla; Soliani, Antonio; Clarkin, John F; Sambataro, Fabio

    2017-09-01

    The impact of borderline personality pathology on facial emotion recognition has been in dispute, with impaired, comparable, and enhanced accuracy all having been found in high borderline personality groups. Discrepancies are likely driven by variations in facial emotion recognition tasks across studies (stimuli type/intensity) and heterogeneity in borderline personality pathology. This study evaluates facial emotion recognition for neutral and negative emotions (fear/sadness/disgust/anger) presented at varying intensities. Effortful control was evaluated as a moderator of facial emotion recognition in borderline personality. Non-clinical multicultural undergraduates (n = 132) completed a morphed facial emotion recognition task of neutral and negative emotional expressions across different intensities (100% Neutral; 25%/50%/75% Emotion) and self-reported borderline personality features and effortful control. Greater borderline personality features related to decreased accuracy in detecting neutral faces, but increased accuracy in detecting negative emotion faces, particularly at low-intensity thresholds. This pattern was moderated by effortful control; for individuals with low but not high effortful control, greater borderline personality features related to misattributions of emotion to neutral expressions, and enhanced detection of low-intensity emotional expressions. Individuals with high borderline personality features may therefore exhibit a bias toward detecting negative emotions that are not or barely present; however, good self-regulatory skills may protect against this potential social-cognitive vulnerability. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  10. Computerised working memory based cognitive remediation therapy does not affect Reading the Mind in the Eyes test performance or neural activity during a Facial Emotion Recognition test in psychosis.

    PubMed

    Mothersill, David; Dillon, Rachael; Hargreaves, April; Castorina, Marco; Furey, Emilia; Fagan, Andrew J; Meaney, James F; Fitzmaurice, Brian; Hallahan, Brian; McDonald, Colm; Wykes, Til; Corvin, Aiden; Robertson, Ian H; Donohoe, Gary

    2018-05-27

    Working memory based cognitive remediation therapy (CT) for psychosis has recently been associated with broad improvements in performance on untrained tasks measuring working memory, episodic memory and IQ, and changes in associated brain regions. However, it is unclear if these improvements transfer to the domain of social cognition and neural activity related to performance on social cognitive tasks. We examined performance on the Reading the Mind in the Eyes test (Eyes test) in a large sample of participants with psychosis who underwent working memory based CT (N = 43) compared to a Control Group of participants with psychosis (N = 35). In a subset of this sample, we used functional magnetic resonance imaging (fMRI) to examine changes in neural activity during a facial emotion recognition task in participants who underwent CT (N = 15) compared to a Control Group (N = 15). No significant effects of CT were observed on Eyes test performance or on neural activity during facial emotion recognition, either at p<0.05 family-wise error, or at a p<0.001 uncorrected threshold, within a priori social cognitive regions of interest. This study suggests that working memory based CT does not significantly impact an aspect of social cognition which was measured behaviourally and neurally. It provides further evidence that deficits in the ability to decode mental state from facial expressions are dissociable from working memory deficits, and suggests that future CT programs should target social cognition in addition to working memory for the purposes of further enhancing social function. This article is protected by copyright. All rights reserved.

  11. Facial affect recognition deficit as a marker of genetic vulnerability to schizophrenia.

    PubMed

    Alfimova, Margarita V; Abramova, Lilia I; Barhatova, Aleksandra I; Yumatova, Polina E; Lyachenko, Galina L; Golimbet, Vera E

    2009-05-01

    The aim of this study was to investigate the possibility that affect recognition impairments are associated with genetic liability to schizophrenia. In a group of 55 unaffected relatives of schizophrenia patients (parents and siblings) we examined the capacity to detect facially expressed emotions and its relationship to schizotypal personality, neurocognitive functioning, and the subject's actual emotional state. The relatives were compared with 103 schizophrenia patients and 99 healthy subjects without any family history of psychoses. Emotional stimuli were nine black-and-white photos of actors, who portrayed six basic emotions as well as interest, contempt, and shame. The results evidenced an affect recognition deficit in relatives, though milder than that in the patients themselves. No correlation between the deficit and schizotypal personality measured with the SPQ was detected in the group of relatives. Neither cognitive functioning, including attention, verbal memory and linguistic ability, nor actual emotional states accounted for their affect recognition impairments. The results suggest that the facial affect recognition deficit in schizophrenia may be related to genetic predisposition to the disorder and may serve as an endophenotype in molecular-genetic studies.

  12. Automatic Facial Expression Recognition and Operator Functional State

    NASA Technical Reports Server (NTRS)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
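
    The abstract names Haar classifiers via OpenCV. A trained cascade is far too large to reproduce here, but its core trick is evaluating rectangle features in constant time from an integral image. A minimal numpy sketch of that building block (not the OpenCV API, and not a trained detector):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r, :c], padded with a
    zero border so rectangle sums need no bounds checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum over img[r:r+h, c:c+w] in O(1) via four table lookups."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: left half minus right half
    (responds to vertical contrast edges, e.g. the side of a face)."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

img = np.zeros((6, 8), dtype=np.int64)
img[:, :4] = 10                       # bright left half, dark right half
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 6, 8))  # strong positive edge response
```

    A real cascade evaluates thousands of such features per window, rejecting non-face windows early, which is what makes it fast enough for live video.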

  13. Automatic Facial Expression Recognition and Operator Functional State

    NASA Technical Reports Server (NTRS)

    Blanson, Nina

    2011-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.

  14. Static facial expression recognition with convolution neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei

    2018-03-01

    Facial expression recognition is currently an active research topic in the fields of computer vision, pattern recognition and artificial intelligence. In this paper, we develop a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset formed by its training, validation and test sets, and fine-tune it on the extended Cohn-Kanade database. In order to reduce overfitting of the models, we utilized different techniques including dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model has excellent classification performance and robustness for facial expression recognition.
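
    The paper's FER2013 network needs a deep-learning framework and trained weights, so as an illustrative sketch only, here is the conv → ReLU → max-pool stage such CNNs stack, written in plain numpy (shapes and kernel chosen for demonstration, not taken from the paper):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (really cross-correlation, as in most
    deep-learning frameworks)."""
    kh, kw = kernel.shape
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max pooling with stride 2 (odd edges are trimmed)."""
    H, W = x.shape
    return x[:H - H % 2, :W - W % 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

# One conv -> ReLU -> pool stage on a toy 6x6 "face patch".
img = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[-1.0, 1.0]])        # horizontal-gradient kernel
fmap = maxpool2(relu(conv2d(img, edge)))
print(fmap.shape)
```

    A classifier like the one described repeats this stage several times, then flattens into fully connected layers with dropout before a 7-way softmax.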

  15. When your face describes your memories: facial expressions during retrieval of autobiographical memories.

    PubMed

    El Haj, Mohamad; Daoudi, Mohamed; Gallouj, Karim; Moustafa, Ahmed A; Nandrino, Jean-Louis

    2018-05-11

    Thanks to the current advances in the software analysis of facial expressions, there is a burgeoning interest in understanding emotional facial expressions observed during the retrieval of autobiographical memories. This review describes the research on facial expressions during autobiographical retrieval showing distinct emotional facial expressions according to the characteristics of retrieved memories. More specifically, this research demonstrates that the retrieval of emotional memories can trigger corresponding emotional facial expressions (e.g. positive memories may trigger positive facial expressions). It also demonstrates how facial expressions vary with the specificity, self-relevance, and past versus future direction of memory construction. Besides linking research on facial expressions during autobiographical retrieval to cognitive and affective characteristics of autobiographical memory in general, this review positions this research within the broader context of research on the physiological characteristics of autobiographical retrieval. We also provide several perspectives for clinical studies to investigate facial expressions in populations with deficits in autobiographical memory (e.g. whether autobiographical overgenerality in neurologic and psychiatric populations may trigger fewer emotional facial expressions). In sum, this review paper demonstrates how the evaluation of facial expressions during autobiographical retrieval may help understand the functioning and dysfunctioning of autobiographical memory.

  16. Facial expression recognition based on improved deep belief networks

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method of facial expression recognition based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. This method uses LBP to extract facial features and then uses the improved deep belief networks as the detector and classifier of those LBP features. The combination of LBP and improved deep belief networks is thus realized for facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate improved significantly.
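
    The DBN itself is beyond the scope of a short sketch, but the LBP operator the paper feeds into it is compact. A minimal numpy version of the basic 3x3, 8-neighbor LBP (the paper may use a different variant, e.g. uniform or multi-radius patterns):

```python
import numpy as np

def lbp_8(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel gets an 8-bit
    code, one bit per neighbor, set when that neighbor >= the center.
    Histograms of these codes form a texture descriptor."""
    c = img[1:-1, 1:-1]
    # Neighbor offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(offsets):
        nb = img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

img = np.array([[9, 9, 9],
                [0, 5, 9],
                [0, 0, 0]], dtype=np.int64)
print(lbp_8(img))   # one interior pixel -> one 8-bit code
```

    Because each code depends only on intensity orderings, the descriptor is robust to monotonic lighting changes, which is why LBP is a common front end for expression recognition.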

  17. Laptop Computer - Based Facial Recognition System Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. A. Cain; G. B. Singleton

    2001-03-01

    The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000), we selected Visionics' FaceIt® software package for evaluation. The FRVT 2000 was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were currently available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses; it was the most appropriate package for the specific applications and requirements of this assessment. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching for facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discreetly. For this application, an operational facial recognition system would consist of one central computer hosting the master image database with multiple standalone systems configured with duplicates of the master

  18. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    NASA Astrophysics Data System (ADS)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    This paper proposes an automatic facial emotion recognition algorithm that comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: a training phase and a recognition phase. In the training phase, the system classifies all training expressions of the six emotions considered into six classes (one for each emotion). In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, finding the fiducial points, and feeding the resulting features to the trained neural architecture.
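
    The abstract does not list the filter-bank parameters, so the values below are assumptions for illustration. The real part of a Gabor kernel is just a Gaussian envelope multiplied by an oriented cosine carrier, which can be generated directly in numpy:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real part of a Gabor filter: Gaussian envelope (width sigma,
    aspect gamma) times a cosine carrier of wavelength lambd, rotated
    to orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / lambd + psi)

# A small illustrative bank: 4 orientations at a single scale; the paper
# would convolve these at each fiducial point and keep the magnitudes.
bank = [gabor_kernel(9, 2.0, t, 4.0) for t in np.arange(4) * np.pi / 4]
print(len(bank), bank[0].shape)
```

    Filtering a patch around each fiducial point with every kernel in the bank and taking magnitudes yields the Gabor part of the feature vector; the 14 FAPs are appended to it.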

  19. Gaze behavior predicts memory bias for angry facial expressions in stable dysphoria.

    PubMed

    Wells, Tony T; Beevers, Christopher G; Robison, Adrienne E; Ellis, Alissa J

    2010-12-01

    Interpersonal theories suggest that depressed individuals are sensitive to signs of interpersonal rejection, such as angry facial expressions. The present study examined memory bias for happy, sad, angry, and neutral facial expressions in stably dysphoric and stably nondysphoric young adults. Participants' gaze behavior (i.e., fixation duration, number of fixations, and distance between fixations) while viewing these facial expressions was also assessed. Using signal detection analyses, the dysphoric group had better accuracy on a surprise recognition task for angry faces than the nondysphoric group. Further, mediation analyses indicated that greater breadth of attentional focus (i.e., distance between fixations) accounted for enhanced recall of angry faces among the dysphoric group. There were no differences between dysphoria groups in gaze behavior or memory for sad, happy, or neutral facial expressions. Findings from this study identify a specific cognitive mechanism (i.e., breadth of attentional focus) that accounts for biased recall of angry facial expressions in dysphoria. This work also highlights the potential for integrating cognitive and interpersonal theories of depression.
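
    The "signal detection analyses" mentioned here typically reduce to computing d′, the distance between the standardized hit and false-alarm rates. A stdlib sketch with hypothetical counts (the study reports no raw counts):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (+0.5 to each count, +1 to each denominator)
    keeps z finite when a rate would otherwise be 0 or 1."""
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hr) - z(far)

# Hypothetical old/new recognition counts for angry faces.
sensitivity = d_prime(hits=18, misses=2, false_alarms=4, correct_rejections=16)
print(round(sensitivity, 2))
```

    Separating sensitivity (d′) from response bias is what lets the authors attribute the dysphoric group's advantage for angry faces to memory rather than to a tendency to answer "old".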

  20. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease

    PubMed Central

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393

  1. The review and results of different methods for facial recognition

    NASA Astrophysics Data System (ADS)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement because it can operate without the cooperation of the people under detection. Hence, facial recognition is being adopted in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed which achieves more accurate localization on a specific database; (2) a statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance under various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  2. Enhanced facial texture illumination normalization for face recognition.

    PubMed

    Luo, Yong; Guan, Ye-Peng

    2015-08-01

    An uncontrolled lighting condition is one of the most critical challenges for practical face recognition applications. An enhanced facial texture illumination normalization method is put forward to resolve this challenge. An adaptive relighting algorithm is developed to improve the brightness uniformity of face images. Facial texture is extracted by using an illumination estimation difference algorithm. An anisotropic histogram-stretching algorithm is proposed to minimize the intraclass distance of facial skin and maximize the dynamic range of facial texture distribution. Compared with the existing methods, the proposed method can more effectively eliminate the redundant information of facial skin and illumination. Extensive experiments show that the proposed method has superior performance in normalizing illumination variation and enhancing facial texture features for illumination-insensitive face recognition.
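
    The paper's anisotropic histogram-stretching algorithm is not specified in the abstract, so as a generic stand-in only, plain percentile-based contrast stretching shows the "maximize the dynamic range" step in its simplest form:

```python
import numpy as np

def stretch(img, lo_pct=2, hi_pct=98):
    """Percentile-based contrast stretching: map the [lo, hi] intensity
    band onto [0, 1], clipping the tails, to widen the dynamic range of
    a dim or low-contrast image."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-12), 0.0, 1.0)

# A dim, low-contrast "face" occupying a narrow band of intensities.
rng = np.random.default_rng(0)
img = 0.4 + 0.05 * rng.random((32, 32))
out = stretch(img)
print(round(float(out.max() - out.min()), 2))   # dynamic range after stretching
```

    The paper's anisotropic variant differs in that it treats facial-skin and facial-texture regions differently, compressing the former while expanding the latter; the uniform mapping above applies one transfer function to the whole image.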

  3. Image ratio features for facial expression recognition application.

    PubMed

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes raised by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
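
    The paper's exact definition of image ratio features is not in the abstract, but the albedo-cancellation property it claims is easy to verify under a Lambertian image model (intensity = per-pixel albedo x shading). A small numpy illustration, with all arrays synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
albedo = 0.2 + 0.8 * rng.random((16, 16))    # per-pixel surface reflectance
shade_a = 0.5 + 0.5 * rng.random((16, 16))   # shading field, condition A
shade_b = 0.5 + 0.5 * rng.random((16, 16))   # shading field, condition B

# Lambertian image model: observed intensity = albedo * shading.
img_a = albedo * shade_a
img_b = albedo * shade_b

# Dividing the two images cancels the (unknown) albedo entirely, leaving
# only the shading change -- the intuition behind ratio features being
# robust to albedo variation across faces.
ratio = img_a / img_b
print(bool(np.allclose(ratio, shade_a / shade_b)))
```

    Features built from such ratios therefore respond to skin deformation (which changes shading) while ignoring who the skin belongs to, which is what the abstract means by robustness to albedo variation.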

  4. Recognition of facial and musical emotions in Parkinson's disease.

    PubMed

    Saenz, A; Doé de Maindreville, A; Henry, A; de Labbey, S; Bakchine, S; Ehrlé, N

    2013-03-01

    Patients with amygdala lesions were found to be impaired in recognizing the fear emotion both from face and from music. In patients with Parkinson's disease (PD), impairment in recognition of emotions from facial expressions was reported for disgust, fear, sadness and anger, but no studies had yet investigated this population for the recognition of emotions from both face and music. The ability to recognize basic universal emotions (fear, happiness and sadness) from both face and music was investigated in 24 medicated patients with PD and 24 healthy controls. The patient group was tested for language (verbal fluency tasks), memory (digit and spatial span), executive functions (Similarities and Picture Completion subtests of the WAIS III, Brixton and Stroop tests), and visual attention (Bells test), and completed self-assessment scales for anxiety and depression. Results showed that the PD group was significantly impaired in recognition of both fear and sadness emotions from facial expressions, whereas their performance in recognition of emotions from musical excerpts was not different from that of the control group. The scores of fear and sadness recognition from faces were neither correlated to scores in tests for executive and cognitive functions, nor to scores in self-assessment scales. We attributed the observed dissociation to the modality (visual vs. auditory) of presentation and to the ecological value of the musical stimuli that we used. We discuss the relevance of our findings for the care of patients with PD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.

  5. Positive facial expressions during retrieval of self-defining memories.

    PubMed

    Gandolphe, Marie Charlotte; Nandrino, Jean Louis; Delelis, Gérald; Ducro, Claire; Lavallee, Audrey; Saloppe, Xavier; Moustafa, Ahmed A; El Haj, Mohamad

    2017-11-14

    In this study, we investigated, for the first time, facial expressions during the retrieval of Self-defining memories (i.e., those vivid and emotionally intense memories of enduring concerns or unresolved conflicts). Participants self-rated the emotional valence of their Self-defining memories and autobiographical retrieval was analyzed with a facial analysis software. This software (Facereader) synthesizes the facial expression information (i.e., cheek, lips, muscles, eyebrow muscles) to describe and categorize facial expressions (i.e., neutral, happy, sad, surprised, angry, scared, and disgusted facial expressions). We found that participants showed more emotional than neutral facial expressions during the retrieval of Self-defining memories. We also found that participants showed more positive than negative facial expressions during the retrieval of Self-defining memories. Interestingly, participants attributed positive valence to the retrieved memories. These findings are the first to demonstrate the consistency between facial expressions and the emotional subjective experience of Self-defining memories. These findings provide valuable physiological information about the emotional experience of the past.

  6. Influence of make-up on facial recognition.

    PubMed

    Ueda, Sayako; Koyama, Takamasa

    2010-01-01

    Make-up may enhance or disguise facial characteristics. The influence of wearing make-up on facial recognition could be of two kinds: (i) when women do not wear make-up and then are seen with make-up, and (ii) when women wear make-up and then are seen without make-up. A study is reported which shows that light make-up makes it easier to recognise a face, and heavy make-up makes it more difficult. Seeing initially a made-up face makes any subsequent facial recognition more difficult than initially seeing that face without make-up.

  7. Influences on Facial Emotion Recognition in Deaf Children

    ERIC Educational Resources Information Center

    Sidera, Francesc; Amadó, Anna; Martínez, Laura

    2017-01-01

    This exploratory research is aimed at studying facial emotion recognition abilities in deaf children and how they relate to linguistic skills and the characteristics of deafness. A total of 166 participants (75 deaf) aged 3-8 years were administered the following tasks: facial emotion recognition, naming vocabulary and cognitive ability. The…

  8. Facial Emotion Recognition in Bipolar Disorder and Healthy Aging.

    PubMed

    Altamura, Mario; Padalino, Flavia A; Stella, Eleonora; Balzotti, Angela; Bellomo, Antonello; Palumbo, Rocco; Di Domenico, Alberto; Mammarella, Nicola; Fairfield, Beth

    2016-03-01

    Emotional face recognition is impaired in bipolar disorder, but it is not clear whether this is specific for the illness. Here, we investigated how aging and bipolar disorder influence dynamic emotional face recognition. Twenty older adults, 16 bipolar patients, and 20 control subjects performed a dynamic affective facial recognition task and a subsequent rating task. Participants pressed a key as soon as they were able to discriminate whether the neutral face was assuming a happy or angry facial expression and then rated the intensity of each facial expression. Results showed that older adults recognized happy expressions faster, whereas bipolar patients recognized angry expressions faster. Furthermore, both groups rated emotional faces more intensely than did the control subjects. This study is one of the first to compare how aging and clinical conditions influence emotional facial recognition and underlines the need to consider the role of specific and common factors in emotional face recognition.

  9. Slowing down presentation of facial movements and vocal sounds enhances facial expression recognition and induces facial-vocal imitation in children with autism.

    PubMed

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-09-01

    This study examined the effects of slowing down the presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-ROM, under audio or silent conditions, and under dynamic visual conditions (slow, very slow, normal speed) plus a static control condition. Overall, children with autism showed lower performance in expression recognition and more induced facial-vocal imitation than controls. In the autistic group, facial expression recognition and induced facial-vocal imitation were significantly enhanced in the slow conditions. These findings may offer new perspectives for understanding, and intervening in, the verbal, emotional, perceptive and communicative impairments of autistic populations.

  10. Detecting facial emotion recognition deficits in schizophrenia using dynamic stimuli of varying intensities.

    PubMed

    Hargreaves, A; Mothersill, O; Anderson, M; Lawless, S; Corvin, A; Donohoe, G

    2016-10-28

    Deficits in facial emotion recognition have been associated with functional impairments in patients with Schizophrenia (SZ). Whilst a strong ecological argument has been made for the use of both dynamic facial expressions and varied emotion intensities in research, SZ emotion recognition studies to date have primarily used static stimuli of a singular, 100%, intensity of emotion. To address this issue, the present study aimed to investigate accuracy of emotion recognition amongst patients with SZ and healthy subjects using dynamic facial emotion stimuli of varying intensities. To this end an emotion recognition task (ERT) designed by Montagne (2007) was adapted and employed. 47 patients with a DSM-IV diagnosis of SZ and 51 healthy participants were assessed for emotion recognition. Results of the ERT were tested for correlation with performance in areas of cognitive ability typically found to be impaired in psychosis, including IQ, memory, attention and social cognition. Patients were found to perform less well than healthy participants at recognising each of the 6 emotions analysed. Surprisingly, however, groups did not differ in terms of impact of emotion intensity on recognition accuracy; for both groups higher intensity levels predicted greater accuracy, but no significant interaction between diagnosis and emotional intensity was found for any of the 6 emotions. Accuracy of emotion recognition was, however, more strongly correlated with cognition in the patient cohort. Whilst this study demonstrates the feasibility of using ecologically valid dynamic stimuli in the study of emotion recognition accuracy, varying the intensity of the emotion displayed was not demonstrated to impact patients and healthy participants differentially, and thus may not be a necessary variable to include in emotion recognition research. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Neuroticism and facial emotion recognition in healthy adults.

    PubMed

    Andric, Sanja; Maric, Nadja P; Knezevic, Goran; Mihaljevic, Marina; Mirjanic, Tijana; Velthorst, Eva; van Os, Jim

    2016-04-01

    The aim of the present study was to examine whether healthy individuals with higher levels of neuroticism, a robust independent predictor of psychopathology, exhibit altered facial emotion recognition performance. Facial emotion recognition accuracy was investigated in 104 healthy adults using the Degraded Facial Affect Recognition Task (DFAR). Participants' degree of neuroticism was estimated using neuroticism scales extracted from the Eysenck Personality Questionnaire and the Revised NEO Personality Inventory. A significant negative correlation between the degree of neuroticism and the percentage of correct answers on the DFAR was found only for happy facial expressions (significant after applying the Bonferroni correction). Altered sensitivity to emotional context represents a useful and easily obtained cognitive phenotype that correlates strongly with inter-individual variations in neuroticism linked to stress vulnerability and subsequent psychopathology. The present findings could have implications for early intervention strategies and staging models in psychiatry. © 2015 Wiley Publishing Asia Pty Ltd.

  12. Interest and attention in facial recognition.

    PubMed

    Burgess, Melinda C R; Weaver, George E

    2003-04-01

    When applied to facial recognition, the levels of processing paradigm has yielded consistent results: faces processed in deep conditions are recognized better than faces processed under shallow conditions. However, there are multiple explanations for this occurrence. The own-race advantage in facial recognition, the tendency to recognize faces from one's own race better than faces from another race, is also consistently shown but not clearly explained. This study was designed to test the hypothesis that the levels of processing findings in facial recognition are a result of interest and attention, not differences in processing. This hypothesis was tested for both own and other faces with 105 Caucasian general psychology students. Levels of processing was manipulated as a between-subjects variable; students were asked to answer one of four types of study questions, e.g., "deep" or "shallow" processing questions, while viewing the study faces. Students' recognition of a subset of previously presented Caucasian and African-American faces from a test-set with an equal number of distractor faces was tested. They indicated their interest in and attention to the task. The typical levels of processing effect was observed with better recognition performance in the deep conditions than in the shallow conditions for both own- and other-race faces. The typical own-race advantage was also observed regardless of level of processing condition. For both own- and other-race faces, level of processing explained a significant portion of the recognition variance above and beyond what was explained by interest in and attention to the task.

  13. Cognitive penetrability and emotion recognition in human facial expressions

    PubMed Central

    Marchi, Francesco

    2015-01-01

    Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion. PMID:26150796

  14. Neurobiological mechanisms associated with facial affect recognition deficits after traumatic brain injury.

    PubMed

    Neumann, Dawn; McDonald, Brenna C; West, John; Keiski, Michelle A; Wang, Yang

    2016-06-01

    The neurobiological mechanisms that underlie facial affect recognition deficits after traumatic brain injury (TBI) have not yet been identified. Using functional magnetic resonance imaging (fMRI), study aims were to 1) determine if there are differences in brain activation during facial affect processing in people with TBI who have facial affect recognition impairments (TBI-I) relative to people with TBI and healthy controls who do not have facial affect recognition impairments (TBI-N and HC, respectively); and 2) identify relationships between neural activity and facial affect recognition performance. A facial affect recognition screening task performed outside the scanner was used to determine group classification; TBI patients who performed greater than one standard deviation below normal performance scores were classified as TBI-I, while TBI patients with normal scores were classified as TBI-N. An fMRI facial recognition paradigm was then performed within the 3T environment. Results from 35 participants are reported (TBI-I = 11, TBI-N = 12, and HC = 12). For the fMRI task, TBI-I and TBI-N groups scored significantly lower than the HC group. Blood oxygenation level-dependent (BOLD) signals for facial affect recognition compared to a baseline condition of viewing a scrambled face, revealed lower neural activation in the right fusiform gyrus (FG) in the TBI-I group than the HC group. Right fusiform gyrus activity correlated with accuracy on the facial affect recognition tasks (both within and outside the scanner). Decreased FG activity suggests facial affect recognition deficits after TBI may be the result of impaired holistic face processing. Future directions and clinical implications are discussed.

  15. Familial covariation of facial emotion recognition and IQ in schizophrenia.

    PubMed

    Andric, Sanja; Maric, Nadja P; Mihaljevic, Marina; Mirjanic, Tijana; van Os, Jim

    2016-12-30

    Alterations in general intellectual ability and social cognition are core features of schizophrenia, evident at the illness' onset and persistent throughout its course. However, previous studies examining cognitive alterations in siblings discordant for schizophrenia have yielded inconsistent results. The present study aimed to investigate the nature of the association between facial emotion recognition and general IQ by applying a genetically sensitive cross-trait cross-sibling design. Participants (total n=158; patients, unaffected siblings, controls) were assessed using the Benton Facial Recognition Test, the Degraded Facial Affect Recognition Task (DFAR) and the Wechsler Adult Intelligence Scale-III. Patients had lower IQ and altered facial emotion recognition in comparison to the other groups. Healthy siblings and controls did not differ significantly in IQ or DFAR performance, but siblings exhibited intermediate recognition of angry facial expressions. Cross-trait within-subject analyses showed significant associations between overall DFAR performance and IQ in all participants. Within-trait cross-sibling analyses found significant associations between patients' and siblings' IQ and overall DFAR performance, suggesting familial clustering of these traits. Finally, cross-trait cross-sibling analyses revealed familial covariation of facial emotion recognition and IQ in siblings discordant for schizophrenia, further indicating their familial etiology. Both traits are important phenotypes for genetic studies and potential early clinical markers of schizophrenia-spectrum disorders. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. Emotional facial expressions differentially influence predictions and performance for face recognition.

    PubMed

    Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M

    2013-01-01

    This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion.

  17. Plastic surgery and the biometric e-passport: implications for facial recognition.

    PubMed

    Ologunde, Rele

    2015-04-01

    This correspondence comments on the challenges that plastic, reconstructive and aesthetic surgery poses to the facial recognition algorithms employed by biometric passports. The limitations of facial recognition technology in patients who have undergone facial plastic surgery are also discussed. Finally, the advice of the UK HM Passport Office to people who undergo facial surgery is reported.

  18. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions.

    PubMed

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expression recognition task. Recognition bias was measured as participants' tendency to over-attribute the anger label to other negative facial expressions. Participants' heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated a recognition bias for angry facial expressions among street-children, and a similar, although significantly less pronounced, tendency among controls. Participants' performance was controlled for age, cognitive and educational levels, and naming skills; none of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants' tendency to use the anger label was evidenced. Taken together, these results suggest that childhood exposure to maltreatment amplifies children's "pre-existing bias" for anger labeling in a forced-choice emotion recognition task. Moreover, they strengthen the thesis that the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim's perceptive and attentive focus toward salient environmental social stimuli.

  19. Facial Affect Recognition and Social Anxiety in Preschool Children

    ERIC Educational Resources Information Center

    Ale, Chelsea M.; Chorney, Daniel B.; Brice, Chad S.; Morris, Tracy L.

    2010-01-01

    Research relating anxiety and facial affect recognition has focused mostly on school-aged children and adults and has yielded mixed results. The current study sought to demonstrate an association among behavioural inhibition and parent-reported social anxiety, shyness, social withdrawal and facial affect recognition performance in 30 children,…

  1. Relation between facial affect recognition and configural face processing in antipsychotic-free schizophrenia.

    PubMed

    Fakra, Eric; Jouve, Elisabeth; Guillaume, Fabrice; Azorin, Jean-Michel; Blin, Olivier

    2015-03-01

    Deficit in facial affect recognition is a well-documented impairment in schizophrenia, closely connected to social outcome. This deficit could be related to psychopathology, but also to a broader dysfunction in processing facial information. In addition, patients with schizophrenia inadequately use configural information, a type of processing that relies on spatial relationships between facial features. To date, no study has specifically examined the link between symptoms and misuse of configural information in the deficit in facial affect recognition. Unmedicated schizophrenia patients (n = 30) and matched healthy controls (n = 30) performed a facial affect recognition task and a face inversion task, which tests the aptitude to rely on configural information. In patients, regressions were carried out between facial affect recognition, symptom dimensions and the inversion effect. Patients, compared with controls, showed a deficit in facial affect recognition and a lower inversion effect. Negative symptoms and a lower inversion effect accounted for 41.2% of the variance in facial affect recognition. This study confirms the presence of a deficit in facial affect recognition, as well as dysfunctional use of configural information, in antipsychotic-free patients. Negative symptoms and poor processing of configural information explained a substantial part of the deficient recognition of facial affect. We speculate that several factors contribute to this deficit, among which psychopathology and failure to correctly use configural information stand independently. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  2. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative-regions representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region while reducing the feature vector dimension.
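    The pipeline this abstract outlines (gradient image, LBP micro-patterns, MI-scored region selection) can be sketched in plain NumPy. The function names below are illustrative assumptions, not code from the paper, and a production system would typically use a library LBP implementation:

```python
import numpy as np

def gradient_magnitude(img):
    """Gradient image on which LBP is computed in the described approach."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def lbp_codes(img):
    """Basic 8-neighbour LBP: each neighbour is thresholded against the centre
    pixel and contributes one bit to an 8-bit code."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def mutual_information(x, y, bins=8):
    """MI between a discretised per-region feature x and class labels y,
    used here as the region-ranking score."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

In such a scheme, each candidate face region would be summarized (e.g., by its histogrammed LBP responses), discretized, and ranked by `mutual_information` against the expression labels, keeping only the highest-scoring regions as the feature vector.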

  3. A study on facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Xu, Jingjing

    2017-09-01

    In communication, postures and facial expressions of feelings such as happiness, anger and sadness play important roles in conveying information. With the development of the technology, a number of algorithms dealing with face alignment, facial landmark detection and localization, classification, and pose estimation have recently been put forward, but many challenges and problems remain. In this paper, several techniques for facial expression recognition under varying pose are summarized and analyzed, including a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning of the input domain for classification, and robust-statistics face frontalization.

  4. [Developmental change in facial recognition by premature infants during infancy].

    PubMed

    Konishi, Yukihiko; Kusaka, Takashi; Nishida, Tomoko; Isobe, Kenichi; Itoh, Susumu

    2014-09-01

    Premature infants are thought to be at increased risk for developmental disorders. We evaluated facial recognition by premature infants during early infancy, as this ability has been reported to be commonly impaired in developmentally disabled children. In premature infants and full-term infants at the age of 4 months (4 corrected months for premature infants), visual behaviors during facial recognition tasks were recorded and analyzed using an eye-tracking system (Tobii T60, Tobii Technology, Sweden). Both groups of infants showed a preference for normal facial expressions; however, no preference for the upper face was observed in premature infants. Our study suggests that facial recognition ability in premature infants may develop differently from that in full-term infants.

  5. Facial emotion recognition in patients with focal and diffuse axonal injury.

    PubMed

    Yassin, Walid; Callahan, Brandy L; Ubukata, Shiho; Sugihara, Genichi; Murai, Toshiya; Ueda, Keita

    2017-01-01

    Facial emotion recognition impairment has been well documented in patients with traumatic brain injury. Studies exploring the neural substrates involved in such deficits have implicated specific grey matter structures (e.g. orbitofrontal regions), as well as diffuse white matter damage. Our study aims to clarify whether different types of injuries (i.e. focal vs. diffuse) will lead to different types of impairments on facial emotion recognition tasks, as no study has directly compared these patients. The present study examined performance and response patterns on a facial emotion recognition task in 14 participants with diffuse axonal injury (DAI), 14 with focal injury (FI) and 22 healthy controls. We found that, overall, participants with FI and DAI performed more poorly than controls on the facial emotion recognition task. Further, we observed comparable emotion recognition performance in participants with FI and DAI, despite differences in the nature and distribution of their lesions. However, the rating response pattern between the patient groups was different. This is the first study to show that pure DAI, without gross focal lesions, can independently lead to facial emotion recognition deficits and that rating patterns differ depending on the type and location of trauma.

  6. Association of impaired facial affect recognition with basic facial and visual processing deficits in schizophrenia.

    PubMed

    Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue

    2009-06-15

    Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.

  7. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than previous researches and other fusion methods.
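    As a rough illustration of the score-level fusion this abstract describes, the sketch below trains a linear SVM on two-dimensional vectors of (shape score, appearance score). The Pegasos-style subgradient trainer is a generic stand-in written for self-containment, not the authors' implementation, and the function names are assumptions:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Tiny Pegasos-style subgradient trainer for a linear SVM.
    X: (n, d) feature matrix, y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y[i] * (X[i] @ w + b)
            w *= (1.0 - eta * lam)         # regularization shrinkage
            if margin < 1:                 # hinge-loss subgradient step
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

def fuse_scores(shape_score, appearance_score, w, b):
    """Combine the two matching scores with the learned hyperplane;
    returns +1.0 or -1.0 (the fused class decision)."""
    return float(np.sign(np.array([shape_score, appearance_score]) @ w + b))
```

The same idea extends to the paper's multi-class stage by training one SVM over four expression classes (e.g., one-vs-rest) on the fused representation.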

  8. Effects of Minority Status on Facial Recognition and Naming Performance.

    ERIC Educational Resources Information Center

    Roberts, Richard J.; Hamsher, Kerry

    1984-01-01

    Examined the differential effects of minority status in Blacks (N=94) on a facial recognition test and a naming test. Results showed that performance on the facial recognition test was relatively free of racial bias, but this was not the case for visual naming. (LLL)

  9. More Pronounced Deficits in Facial Emotion Recognition for Schizophrenia than Bipolar Disorder

    PubMed Central

    Goghari, Vina M; Sponheim, Scott R

    2012-01-01

    Schizophrenia and bipolar disorder are typically separated in diagnostic systems. Behavioural, cognitive, and brain abnormalities associated with each disorder nonetheless overlap. We evaluated the diagnostic specificity of facial emotion recognition deficits in schizophrenia and bipolar disorder to determine whether select aspects of emotion recognition differed for the two disorders. The investigation used an experimental task that included the same facial images in an emotion recognition condition and an age recognition condition (to control for processes associated with general face recognition) in 27 schizophrenia patients, 16 bipolar I patients, and 30 controls. Schizophrenia and bipolar patients exhibited both shared and distinct aspects of facial emotion recognition deficits. Schizophrenia patients had deficits in recognizing angry facial expressions compared to healthy controls and bipolar patients. Compared to control participants, both schizophrenia and bipolar patients were more likely to mislabel facial expressions of anger as fear. Given that schizophrenia patients exhibited a deficit in emotion recognition for angry faces, which did not appear due to generalized perceptual and cognitive dysfunction, improving recognition of threat-related expression may be an important intervention target to improve social functioning in schizophrenia. PMID:23218816

  10. Non-Cooperative Facial Recognition Video Dataset Collection Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Marcia L.; Erikson, Rebecca L.; Lombardo, Nicholas J.

    The Pacific Northwest National Laboratory (PNNL) will produce a non-cooperative (i.e. not posing for the camera) facial recognition video data set for research purposes to evaluate and enhance facial recognition systems technology. The aggregate data set consists of 1) videos capturing PNNL role players and public volunteers in three key operational settings, 2) photographs of the role players for enrolling in an evaluation database, and 3) ground truth data that documents when the role player is within various camera fields of view. PNNL will deliver the aggregate data set to DHS, who may then choose to make it available to other government agencies interested in evaluating and enhancing facial recognition systems. The three operational settings that will be the focus of the video collection effort include: 1) unidirectional crowd flow, 2) bi-directional crowd flow, and 3) linear and/or serpentine queues.

  11. Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.

    PubMed

    Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M

    2014-11-01

    Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. In order to identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were administered in order to control for potentially confounding visuo-perceptual functioning and face processing. Nineteen participants with paranoid SZ, 22 with HFASD and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD in all obtained emotion recognition measures, but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is a SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of the two disorders. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. The recognition of facial emotion expressions in Parkinson's disease.

    PubMed

    Assogna, Francesca; Pontieri, Francesco E; Caltagirone, Carlo; Spalletta, Gianfranco

    2008-11-01

    A limited number of studies in Parkinson's Disease (PD) suggest a disturbance of recognition of facial emotion expressions. In particular, disgust recognition impairment has been reported in unmedicated and medicated PD patients. However, the results are rather inconclusive in the definition of the degree and the selectivity of emotion recognition impairment, and an associated impairment of almost all basic facial emotions in PD is also described. Few studies have investigated the relationship with neuropsychiatric and neuropsychological symptoms with mainly negative results. This inconsistency may be due to many different problems, such as emotion assessment, perception deficit, cognitive impairment, behavioral symptoms, illness severity and antiparkinsonian therapy. Here we review the clinical characteristics and neural structures involved in the recognition of specific facial emotion expressions, and the plausible role of dopamine transmission and dopamine replacement therapy in these processes. It is clear that future studies should be directed to clarify all these issues.

  13. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    PubMed Central

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in-depth younger and older adults’ ability to label a virtual agent’s facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  14. Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.

    PubMed

    Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi

    2015-03-19

    In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of the patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity and noisy expressions encountered in practice, a critical problem seldom addressed in existing work. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
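
    The sparse-representation step the abstract relies on can be sketched in isolation. The snippet below is a minimal illustration of sparse-representation classification (SRC) on synthetic vectors, assuming a plain ISTA solver and class-wise reconstruction residuals; it is not the paper's weighted LBP-patch, multi-layer pipeline, and all names are ours.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(A, y, lam=0.01, n_iter=300):
    # ISTA iterations for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

def src_classify(A, labels, y, lam=0.01):
    # Classify y as the class whose training columns reconstruct it best
    # from the sparse coefficient vector.
    x = sparse_code(A, y, lam)
    residual = lambda c: np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
    return min(np.unique(labels), key=residual)
```

    A test face is coded as a sparse combination of all training columns at once; the class whose columns account for it with the smallest residual wins.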

  15. Facial emotion recognition ability: psychiatry nurses versus nurses from other departments.

    PubMed

    Gultekin, Gozde; Kincir, Zeliha; Kurt, Merve; Catal, Yasir; Acil, Asli; Aydin, Aybike; Özcan, Mualla; Delikkaya, Busra N; Kacar, Selma; Emul, Murat

    2016-12-01

    Facial emotion recognition is a basic element of non-verbal communication. Although some researchers have shown that recognizing facial expressions may be important in the interaction between doctors and patients, there are no studies concerning facial emotion recognition in nurses. Here, we aimed to investigate facial emotion recognition ability in nurses and to compare this ability between nurses from psychiatry and other departments. In this cross-sectional study, sixty-seven nurses were divided into two groups according to their departments: psychiatry (n=31) and other departments (n=36). A Facial Emotion Recognition Test, constructed from a set of photographs from Ekman and Friesen's book "Pictures of Facial Affect", was administered to all participants. In the whole group, the most accurately recognized facial emotion was happiness (99.14%), while the least accurately recognized was fear (47.71%). There were no significant differences between the two groups in mean accuracy rates for recognizing happy, sad, fearful, angry, and surprised facial expressions (for all, p>0.05). The ability to recognize disgusted and neutral facial emotions tended to be better in nurses from other departments than in psychiatry nurses (p=0.052 and p=0.053, respectively). This study was the first to reveal no difference in facial emotion recognition ability between psychiatry nurses and non-psychiatry nurses. In medical education curricula throughout the world, no specific training program is scheduled for recognizing emotional cues of patients. We consider that improving the ability to recognize facial emotion expressions in medical staff might be beneficial in reducing inappropriate patient-staff interactions.

  16. Recognition of facial emotions in neuropsychiatric disorders.

    PubMed

    Kohler, Christian G; Turner, Travis H; Gur, Raquel E; Gur, Ruben C

    2004-04-01

    Recognition of facial emotions represents an important aspect of interpersonal communication and is governed by select neural substrates. We present data on emotion recognition in healthy young adults utilizing a novel set of color photographs of evoked universal emotions. In addition, we review the recent literature on emotion recognition in psychiatric and neurologic disorders, and studies that compare different disorders.

  17. Use of Facial Recognition Software to Identify Disaster Victims With Facial Injuries.

    PubMed

    Broach, John; Yong, Rothsovann; Manuell, Mary-Elise; Nichols, Constance

    2017-10-01

    After large-scale disasters, victim identification frequently presents a challenge and a priority for responders attempting to reunite families and ensure proper identification of deceased persons. The purpose of this investigation was to determine whether currently commercially available facial recognition software can successfully identify disaster victims with facial injuries. Photos of 106 people were taken before and after application of moulage designed to simulate traumatic facial injuries. These photos as well as photos from volunteers' personal photo collections were analyzed by using facial recognition software to determine whether this technology could accurately identify a person with facial injuries. The study results suggest that a responder could expect a correct match between submitted photos and photos of injured patients between 39% and 45% of the time, and a much higher rate when the submitted photos were of optimal quality, with correct matches exceeding 90% in most situations. The present results suggest that the use of this software would provide significant benefit to responders. Although a correct result was returned only about 40% of the time, this would still likely represent a benefit for a responder trying to identify hundreds or thousands of victims. (Disaster Med Public Health Preparedness. 2017;11:568-572).

  18. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    NASA Astrophysics Data System (ADS)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions have an important role in interpersonal communications and estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become an important topic in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features, and comparable results to studies using whole face information, only slightly lower by ~2.5% compared to the best whole-face facial expression recognition system while using only ~1/3 of the facial region.
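
    The sequential forward selection step can be sketched on its own. The snippet below greedily grows a feature subset, using nearest-centroid training accuracy as a simple stand-in for the SVM wrapper criterion; the data and names are illustrative assumptions, not the iFER implementation.

```python
import numpy as np

def centroid_accuracy(X, y, feats):
    # Nearest-centroid training accuracy on a feature subset -- a simple
    # stand-in for an SVM-based wrapper criterion.
    Xs = X[:, feats]
    cents = {c: Xs[y == c].mean(axis=0) for c in np.unique(y)}
    preds = [min(cents, key=lambda c: np.linalg.norm(row - cents[c])) for row in Xs]
    return float(np.mean(np.array(preds) == y))

def sequential_forward_selection(X, y, k):
    # Greedily add the single feature that most improves the criterion,
    # until k features have been chosen.
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda f: centroid_accuracy(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

    Because each candidate is scored in the context of the features already chosen, SFS can capture simple feature interactions that per-feature ranking misses, at the cost of a greedy (non-optimal) search.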

  19. Quantifying facial expression recognition across viewing conditions.

    PubMed

    Goren, Deborah; Wilson, Hugh R

    2006-04-01

    Facial expressions are key to social interactions and to assessment of potential danger in various situations. Therefore, our brains must be able to recognize facial expressions when they are transformed in biologically plausible ways. We used synthetic happy, sad, angry and fearful faces to determine the amount of geometric change required to recognize these emotions during brief presentations. Five-alternative forced choice conditions involving central viewing, peripheral viewing and inversion were used to study recognition among the four emotions. Two-alternative forced choice was used to study affect discrimination when spatial frequency information in the stimulus was modified. The results show an emotion and task-dependent pattern of detection. Facial expressions presented with low peak frequencies are much harder to discriminate from neutral than faces defined by either mid or high peak frequencies. Peripheral presentation of faces also makes recognition much more difficult, except for happy faces. Differences between fearful detection and recognition tasks are probably due to common confusions with sadness when recognizing fear from among other emotions. These findings further support the idea that these emotions are processed separately from each other.

  20. Face recognition by applying wavelet subband representation and kernel associative memory.

    PubMed

    Zhang, Bai-Ling; Zhang, Haihong; Ge, Shuzhi Sam

    2004-01-01

    In this paper, we propose an efficient face recognition scheme which has two features: 1) representation of face images by two-dimensional (2-D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low resolution "thumb-nail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. As there are usually very few training samples per person, we constructed an associative memory (AM) model for each person and proposed to improve the performance of AM models by kernel methods. Specifically, we first applied kernel transforms to each possible pair of training face samples and then mapped the high-dimensional feature space back to input space. Our scheme using modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages have been demonstrated. By associative memory, all the prototypical faces of one particular person are used to reconstruct themselves, and the reconstruction error for a probe face image is used to decide if the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets, the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided and our proposed scheme offers better recognition accuracy on all of the face datasets.
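
    The reconstruction-error decision rule described above can be sketched with a plain linear autoassociative memory; the paper's kernel extension is omitted, and the prototypes, identities, and dimensions below are illustrative assumptions.

```python
import numpy as np

def build_memory(prototypes):
    # Linear autoassociative memory: W = P @ pinv(P) projects any input onto
    # the subspace spanned by the stored prototype columns, so stored faces
    # reconstruct themselves (almost) exactly.
    P = np.column_stack(prototypes)
    return P @ np.linalg.pinv(P)

def classify(memories, probe):
    # Reconstruct the probe with each person's memory; the smallest
    # reconstruction error decides the identity.
    error = lambda W: np.linalg.norm(probe - W @ probe)
    return min(memories, key=lambda pid: error(memories[pid]))
```

    One memory is built per person from that person's prototypes only, which is what makes the scheme modular: enrolling a new person never requires retraining the others.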

  1. Novel dynamic Bayesian networks for facial action element recognition and understanding

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.

  2. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    ERIC Educational Resources Information Center

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  3. Mapping correspondence between facial mimicry and emotion recognition in healthy subjects.

    PubMed

    Ponari, Marta; Conson, Massimiliano; D'Amico, Nunzia Pina; Grossi, Dario; Trojano, Luigi

    2012-12-01

    We aimed at verifying the hypothesis that facial mimicry is causally and selectively involved in emotion recognition. For this purpose, in Experiment 1, we explored the effect of tonic contraction of muscles in upper or lower half of participants' face on their ability to recognize emotional facial expressions. We found that the "lower" manipulation specifically impaired recognition of happiness and disgust, the "upper" manipulation impaired recognition of anger, while both manipulations affected recognition of fear; recognition of surprise and sadness were not affected by either blocking manipulations. In Experiment 2, we verified whether emotion recognition is hampered by stimuli in which an upper or lower half-face showing an emotional expression is combined with a neutral half-face. We found that the neutral lower half-face interfered with recognition of happiness and disgust, whereas the neutral upper half impaired recognition of anger; recognition of fear and sadness was impaired by both manipulations, whereas recognition of surprise was not affected by either manipulation. Taken together, the present findings support simulation models of emotion recognition and provide insight into the role of mimicry in comprehension of others' emotional facial expressions. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  4. Emotional facial recognition in proactive and reactive violent offenders.

    PubMed

    Philipp-Wiegmann, Florence; Rösler, Michael; Retz-Junginger, Petra; Retz, Wolfgang

    2017-10-01

    The purpose of this study is to analyse individual differences in the ability of emotional facial recognition in violent offenders, who were characterised as either reactive or proactive in relation to their offending. In accordance with findings of our previous study, we expected greater impairment of facial recognition in reactive than in proactive violent offenders. To assess the ability to recognize facial expressions, the computer-based Facial Emotional Expression Labeling Test (FEEL) was performed. Group allocation of reactive and proactive violent offenders and assessment of psychopathic traits were performed by an independent forensic expert using rating scales (PROREA, PCL-SV). Compared to proactive violent offenders and controls, the performance of emotion recognition in the reactive offender group was significantly lower, both in total and especially in recognition of negative emotions such as anxiety (d = -1.29), sadness (d = -1.54), and disgust (d = -1.11). Furthermore, reactive violent offenders showed a tendency to interpret non-anger emotions as anger. In contrast, proactive violent offenders performed as well as controls. General and specific deficits in reactive violent offenders are in line with the results of our previous study and correspond to predictions of the Integrated Emotion System (IES, 7) and the hostile attribution processes (21). Due to the different error pattern in the FEEL test, the theoretical distinction between proactive and reactive aggression can be supported based on emotion recognition, even though aggression itself is always a heterogeneous act rather than a distinct one-dimensional concept.

  5. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a method of facial expression recognition based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method extracts features with the improved LTP operator and then uses the stacked auto-encoder as the detector and classifier of the extracted LTP features. The combination of improved LTP and the deep network is thereby realized for facial expression recognition. The recognition rate on the CK+ database improved significantly.
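
    The standard 3x3 LTP operator the method builds on can be sketched as follows: each neighbour is coded +1/0/-1 against the centre pixel with a tolerance t, and the ternary code is split into two binary LBP-style codes. This follows the conventional LTP definition, not necessarily the paper's improved variant.

```python
import numpy as np

def ltp_codes(img, t=5):
    # 3x3 local ternary pattern over the image interior. Returns two
    # integer code maps: "upper" (neighbour > centre + t) and
    # "lower" (neighbour < centre - t), each an 8-bit LBP-style code.
    img = img.astype(int)
    c = img[1:-1, 1:-1]                       # centre pixels
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]  # 8 neighbours, clockwise
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        upper += (n > c + t).astype(int) << bit
        lower += (n < c - t).astype(int) << bit
    return upper, lower
```

    Histograms of the upper and lower code maps (typically per image patch) would then form the feature vector handed to the classifier; the tolerance t is what makes LTP less noise-sensitive than plain LBP.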

  6. Pose-variant facial expression recognition using an embedded image system

    NASA Astrophysics Data System (ADS)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction is automated facial expression recognition. Through recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified into happiness, neutral, sadness, surprise or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
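
    The distance-based feature construction can be sketched as follows: pairwise Euclidean distances between the tracked landmark points form the vector fed to the SVM. The abstract does not specify which point pairs are used, so the sketch below simply takes all pairs; the input layout is an illustrative assumption.

```python
from itertools import combinations

import numpy as np

def distance_features(landmarks):
    # Pairwise Euclidean distances between tracked landmark points
    # (rows of an (N, 2) array); 14 points yield C(14, 2) = 91 features.
    pts = np.asarray(landmarks, dtype=float)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)])
```

    Using inter-point distances rather than raw coordinates gives the feature vector a degree of invariance to in-plane translation and rotation of the face.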

  7. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition

    PubMed Central

    Murphy, Jillian M.; Ridley, Nicole J.; Vercammen, Ans

    2015-01-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS to responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction. PMID:25971602

  8. Facial recognition using simulated prosthetic pixelized vision.

    PubMed

    Thompson, Robert W; Barnett, G David; Humayun, Mark S; Dagnelie, Gislin

    2003-11-01

    To evaluate a model of simulated pixelized prosthetic vision using noncontiguous circular phosphenes, and to test the effects of phosphene and grid parameters on facial recognition. A video headset was used to view a reference set of four faces, followed by a partially averted image of one of those faces viewed through a square pixelizing grid that contained 10x10 to 32x32 dots separated by gaps. The grid size, dot size, gap width, dot dropout rate, and gray-scale resolution were varied separately about a standard test condition, for a total of 16 conditions. All tests were first performed at 99% contrast and then repeated at 12.5% contrast. Discrimination speed and performance were influenced by all stimulus parameters. The subjects achieved highly significant facial recognition accuracy for all high-contrast tests except for grids with 70% random dot dropout and two gray levels. In low-contrast tests, significant facial recognition accuracy was achieved for all but the most adverse grid parameters: total grid area less than 17% of the target image, 70% dropout, four or fewer gray levels, and a gap of 40.5 arcmin. For difficult test conditions, a pronounced learning effect was noticed during high-contrast trials, and a more subtle practice effect on timing was evident during subsequent low-contrast trials. These findings suggest that reliable face recognition with crude pixelized grids can be learned and may be possible, even with a crude visual prosthesis.
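
    The pixelizing manipulation can be simulated along the lines described: average-pool the image onto an n x n dot grid, quantize to a small number of grey levels, and randomly drop a fraction of dots. The function below is a minimal sketch under those assumptions, not the authors' video-headset implementation; it ignores dot shape and gap geometry.

```python
import numpy as np

def phosphene_grid(img, n=16, gray_levels=8, dropout=0.0, rng=None):
    # Reduce a grayscale image (values in 0..255) to an n x n grid of
    # phosphene intensities: average-pool, quantize, then drop dots.
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = img.shape
    # Crop to a multiple of n, then block-average into n x n cells.
    pooled = img[:h - h % n, :w - w % n].reshape(n, h // n, n, w // n).mean(axis=(1, 3))
    # Quantize to gray_levels evenly spaced values in [0, 255].
    q = np.round(pooled / 255 * (gray_levels - 1)) / (gray_levels - 1) * 255
    return q * (rng.random((n, n)) >= dropout)   # surviving dots only
```

    Sweeping n, gray_levels, and dropout over a face image reproduces, in spirit, the grid-size, grey-scale, and dot-dropout conditions varied in the study.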

  9. Intelligent Facial Recognition Systems: Technology advancements for security applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, C.L.

    1993-07-01

    Insider problems such as theft and sabotage can occur within the security and surveillance realm of operations when unauthorized people obtain access to sensitive areas. A possible solution to these problems is a means to identify individuals (not just credentials or badges) in a given sensitive area and provide full time personnel accountability. One approach desirable at Department of Energy facilities for access control and/or personnel identification is an Intelligent Facial Recognition System (IFRS) that is non-invasive to personnel. Automatic facial recognition does not require the active participation of the enrolled subjects, unlike most other biological measurement (biometric) systems (e.g., fingerprint, hand geometry, or eye retinal scan systems). It is this feature that makes an IFRS attractive for applications other than access control such as emergency evacuation verification, screening, and personnel tracking. This paper discusses current technology that shows promising results for DOE and other security applications. A survey of research and development in facial recognition identified several companies and universities that were interested and/or involved in the area. A few advanced prototype systems were also identified. Sandia National Laboratories is currently evaluating facial recognition systems that are in the advanced prototype stage. The initial application for the evaluation is access control in a controlled environment with a constant background and with cooperative subjects. Further evaluations will be conducted in a less controlled environment, which may include a cluttered background and subjects that are not looking towards the camera. The outcome of the evaluations will help identify areas of facial recognition systems that need further development and will help to determine the effectiveness of the current systems for security applications.

  10. Accuracy of computer-assisted navigation: significant augmentation by facial recognition software.

    PubMed

    Glicksman, Jordan T; Reger, Christine; Parasher, Arjun K; Kennedy, David W

    2017-09-01

    Over the past 20 years, image guidance navigation has been used with increasing frequency as an adjunct during sinus and skull base surgery. These devices commonly utilize surface registration, where varying pressure of the registration probe and loss of contact with the face during the skin tracing process can lead to registration inaccuracies, and the number of registration points incorporated is necessarily limited. The aim of this study was to evaluate the use of novel facial recognition software for image guidance registration. Consecutive adults undergoing endoscopic sinus surgery (ESS) were prospectively studied. Patients underwent image guidance registration via both conventional surface registration and facial recognition software. The accuracy of both registration processes were measured at the head of the middle turbinate (MTH), middle turbinate axilla (MTA), anterior wall of sphenoid sinus (SS), and nasal tip (NT). Forty-five patients were included in this investigation. Facial recognition was accurate to within a mean of 0.47 mm at the MTH, 0.33 mm at the MTA, 0.39 mm at the SS, and 0.36 mm at the NT. Facial recognition was more accurate than surface registration at the MTH by an average of 0.43 mm (p = 0.002), at the MTA by an average of 0.44 mm (p < 0.001), and at the SS by an average of 0.40 mm (p < 0.001). The integration of facial recognition software did not adversely affect registration time. In this prospective study, automated facial recognition software significantly improved the accuracy of image guidance registration when compared to conventional surface registration. © 2017 ARS-AAOA, LLC.

  11. Impaired recognition of facial emotions from low-spatial frequencies in Asperger syndrome.

    PubMed

    Kätsyri, Jari; Saalasti, Satu; Tiippana, Kaisa; von Wendt, Lennart; Sams, Mikko

    2008-01-01

    The theory of 'weak central coherence' [Happe, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5-25] implies that persons with autism spectrum disorders (ASDs) have a perceptual bias for local but not for global stimulus features. The recognition of emotional facial expressions representing various different levels of detail has not been studied previously in ASDs. We analyzed the recognition of four basic emotional facial expressions (anger, disgust, fear and happiness) from low-spatial frequencies (overall global shapes without local features) in adults with an ASD. A group of 20 participants with Asperger syndrome (AS) was compared to a group of non-autistic age- and sex-matched controls. Emotion recognition was tested from static and dynamic facial expressions whose spatial frequency contents had been manipulated by low-pass filtering at two levels. The two groups recognized emotions similarly from non-filtered faces and from dynamic vs. static facial expressions. In contrast, the participants with AS were less accurate than controls in recognizing facial emotions from very low-spatial frequencies. The results suggest intact recognition of basic facial emotions and dynamic facial information, but impaired visual processing of global features in ASDs.

  12. Stability of facial emotion recognition performance in bipolar disorder.

    PubMed

    Martino, Diego J; Samamé, Cecilia; Strejilevich, Sergio A

    2016-09-30

    The aim of this study was to assess the performance in emotional processing over time in a sample of euthymic patients with bipolar disorder (BD). Performance in the facial recognition of the six basic emotions (surprise, anger, sadness, happiness, disgust, and fear) did not change during a follow-up period of almost 7 years. These preliminary results suggest that performance in facial emotion recognition might be stable over time in BD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  14. The association between PTSD and facial affect recognition.

    PubMed

    Williams, Christian L; Milanak, Melissa E; Judah, Matt R; Berenbaum, Howard

    2018-05-05

    The major aim of this study was to examine whether, and how, higher levels of PTSD are associated with performance on a facial affect recognition task in which facial expressions of emotion are superimposed on emotionally valenced, non-face images. College students with trauma histories (N = 90) completed a facial affect recognition task as well as measures of exposure to traumatic events and PTSD symptoms. When the face and context matched, participants with higher levels of PTSD were significantly more accurate. When the face and context were mismatched, participants with lower levels of PTSD were more accurate than were those with higher levels of PTSD. These findings suggest that PTSD is associated with how people process affective information. Furthermore, these results suggest that the enhanced attention of people with higher levels of PTSD to affective information can be either beneficial or detrimental to their ability to accurately identify facial expressions of emotion. Limitations, future directions and clinical implications are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Infant Visual Recognition Memory

    ERIC Educational Resources Information Center

    Rose, Susan A.; Feldman, Judith F.; Jankowski, Jeffery J.

    2004-01-01

    Visual recognition memory is a robust form of memory that is evident from early infancy, shows pronounced developmental change, and is influenced by many of the same factors that affect adult memory; it is surprisingly resistant to decay and interference. Infant visual recognition memory shows (a) modest reliability, (b) good discriminant…

  16. Temporal lobe structures and facial emotion recognition in schizophrenia patients and nonpsychotic relatives.

    PubMed

    Goghari, Vina M; Macdonald, Angus W; Sponheim, Scott R

    2011-11-01

    Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.

  17. Facial Recognition in a Group-Living Cichlid Fish.

    PubMed

    Kohda, Masanori; Jordan, Lyndon Alexander; Hotta, Takashi; Kosaka, Naoya; Karino, Kenji; Tanaka, Hirokazu; Taniyama, Masami; Takeyama, Tomohiro

    2015-01-01

    The theoretical underpinnings of the mechanisms of sociality, e.g. territoriality, hierarchy, and reciprocity, are based on assumptions of individual recognition. While behavioural evidence suggests individual recognition is widespread, the cues that animals use to recognise individuals are established in only a handful of systems. Here, we use digital models to demonstrate that facial features are the visual cue used for individual recognition in the social fish Neolamprologus pulcher. Focal fish were exposed to digital images showing four different combinations of familiar and unfamiliar face and body colorations. Focal fish attended to digital models with unfamiliar faces for longer, and from a greater distance, than to models with familiar faces. These results strongly suggest that fish can distinguish individuals accurately using facial colour patterns. Our observations also suggest that fish are able to rapidly (≤ 0.5 sec) discriminate between familiar and unfamiliar individuals, a speed of recognition comparable to primates including humans.

  18. Sex differences in facial emotion recognition across varying expression intensity levels from videos.

    PubMed

    Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark

    2018-01-01

    There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or 'extreme' examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1 sec) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations.
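    The unbiased hit rate Hu used in this study corrects raw accuracy for response bias: a participant who answers "anger" to everything gets a perfect raw hit rate for anger but a low Hu. Assuming the standard confusion-matrix formulation attributed to Wagner (1993), Hu for each emotion is the squared number of hits divided by the product of how often that emotion was presented and how often that response was given:

```python
import numpy as np

def unbiased_hit_rates(confusion: np.ndarray) -> np.ndarray:
    """Unbiased hit rate per category: Hu_i = hits_i**2 /
    (stimuli_of_i * responses_of_i). Equivalent to the raw hit rate
    multiplied by the proportion of uses of response i that were correct,
    penalising indiscriminate responding. Assumes every response was used
    at least once (otherwise the denominator contains a zero)."""
    hits = np.diag(confusion).astype(float)
    stim = confusion.sum(axis=1)   # rows: stimuli presented per category
    resp = confusion.sum(axis=0)   # columns: times each response was given
    return hits**2 / (stim * resp)

# Toy 2-emotion confusion matrix: rows = presented, columns = response.
cm = np.array([[8, 2],
               [4, 6]])
print(unbiased_hit_rates(cm))   # hit rate x response precision, per emotion
```

Hu values are typically arcsine-transformed before parametric tests, a step omitted from this sketch.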

  19. Sex differences in facial emotion recognition across varying expression intensity levels from videos

    PubMed Central

    2018-01-01

    There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or ‘extreme’ examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1 sec) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations. PMID:29293674

  20. Facial recognition in primary focal dystonia.

    PubMed

    Rinnerthaler, Martina; Benecke, Cord; Bartha, Lisa; Entner, Tanja; Poewe, Werner; Mueller, Joerg

    2006-01-01

    The basal ganglia seem to be involved in emotional processing. Primary dystonia is a movement disorder considered to result from basal ganglia dysfunction, and the aim of the present study was to investigate emotion recognition in patients with primary focal dystonia. Thirty-two patients with primary cranial (n=12) and cervical (n=20) dystonia were compared to 32 healthy controls matched for age, sex, and educational level on the facially expressed emotion labeling (FEEL) test, a computer-based tool measuring a person's ability to recognize facially expressed emotions. Patients with cognitive impairment or depression were excluded. None of the patients received medication with a possible cognitive side effect profile and only those with mild to moderate dystonia were included. Patients with primary dystonia showed isolated deficits in the recognition of disgust (P=0.007), while no differences between patients and controls were found with regard to the other emotions (fear, happiness, surprise, sadness, and anger). The findings of the present study add further evidence to the conception that dystonia is not only a motor but a complex basal ganglia disorder including selective emotion recognition disturbances. Copyright (c) 2005 Movement Disorder Society.

  1. Intact anger recognition in depression despite aberrant visual facial information usage.

    PubMed

    Clark, Cameron M; Chiu, Carina G; Diaz, Ruth L; Goghari, Vina M

    2014-08-01

    Previous literature has indicated abnormalities in facial emotion recognition abilities, as well as deficits in basic visual processes in major depression. However, the literature is unclear on a number of important factors including whether or not these abnormalities represent deficient or enhanced emotion recognition abilities compared to control populations, and the degree to which basic visual deficits might impact this process. The present study investigated emotion recognition abilities for angry versus neutral facial expressions in a sample of undergraduate students with Beck Depression Inventory-II (BDI-II) scores indicative of moderate depression (i.e., ≥20), compared to matched low-BDI-II score (i.e., ≤2) controls via the Bubbles Facial Emotion Perception Task. Results indicated unimpaired behavioural performance in discriminating angry from neutral expressions in the high depressive symptoms group relative to the minimal depressive symptoms group, despite evidence of an abnormal pattern of visual facial information usage. The generalizability of the current findings is limited by the highly structured nature of the facial emotion recognition task used, as well as the use of an analog sample of undergraduates scoring high in self-rated symptoms of depression rather than a clinical sample. Our findings suggest that basic visual processes are involved in emotion recognition abnormalities in depression, demonstrating consistency with the emotion recognition literature in other psychopathologies (e.g., schizophrenia, autism, social anxiety). Future research should seek to replicate these findings in clinical populations with major depression, and assess the association between aberrant face gaze behaviours and symptom severity and social functioning. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, as well as variations in poses and facial expressions can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system and based on the so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, utilizes the logarithmical image visualization technique coupled with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database and the AT&T database are used for computer simulation accuracy and efficiency testing. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
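    The local binary pattern (LBP) operator this record pairs with its logarithmic visualization step can be sketched in plain NumPy. This is the basic 3×3 variant of LBP only, not the authors' full pipeline, and the histogram at the end stands in for the block-wise descriptors normally fed to a classifier:

```python
import numpy as np

def lbp_3x3(img: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour local binary pattern: each interior pixel becomes
    an 8-bit code whose bits mark which neighbours are >= the centre pixel."""
    c = img[1:-1, 1:-1]                       # interior (centre) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),   # clockwise from
               (1, 1), (1, 0), (1, -1), (0, -1)]     # the top-left corner
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy : img.shape[0] - 1 + dy,
                    1 + dx : img.shape[1] - 1 + dx]  # shifted view, same shape as c
        codes |= (neigh >= c).astype(np.uint8) << bit
    return codes

face = np.random.default_rng(1).integers(0, 256, (8, 8))   # stand-in face patch
hist, _ = np.histogram(lbp_3x3(face), bins=256, range=(0, 256))
# `hist`, usually computed per image block and concatenated, is the descriptor
```

Because each code compares neighbours only to their local centre, the descriptor is invariant to any monotonic change in illumination, which is why LBP is a natural partner for illumination-normalization front-ends.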

  3. Recognition of computerized facial approximations by familiar assessors.

    PubMed

    Richard, Adam H; Monson, Keith L

    2017-11-01

    Studies testing the effectiveness of facial approximations typically involve groups of participants who are unfamiliar with the approximated individual(s). This limitation requires the use of photograph arrays including a picture of the subject for comparison to the facial approximation. While this practice is often necessary due to the difficulty in obtaining a group of assessors who are familiar with the approximated subject, it may not accurately simulate the thought process of the target audience (friends and family members) in comparing a mental image of the approximated subject to the facial approximation. As part of a larger process to evaluate the effectiveness and best implementation of the ReFace facial approximation software program, the rare opportunity arose to conduct a recognition study using assessors who were personally acquainted with the subjects of the approximations. ReFace facial approximations were generated based on preexisting medical scans, and co-workers of the scan donors were tested on whether they could accurately pick out the approximation of their colleague from arrays of facial approximations. Results from the study demonstrated an overall poor recognition performance (i.e., where a single choice within a pool is not enforced) for individuals who were familiar with the approximated subjects. Out of 220 recognition tests only 10.5% resulted in the assessor selecting the correct approximation (or correctly choosing not to make a selection when the array consisted only of foils), an outcome that was not significantly different from the 9% random chance rate. When allowed to select multiple approximations the assessors felt resembled the target individual, the overall sensitivity for ReFace approximations was 16.0% and the overall specificity was 81.8%. These results differ markedly from the results of a previous study using assessors who were unfamiliar with the approximated subjects. Some possible explanations for this disparity in

  4. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    PubMed

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding and thus lead to neurocognitive dysfunctions, such as deficits in facial affect recognition. To gain an insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event Related Spectral Perturbation (ERSP) and the Inter Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were found to be significantly weaker in patients compared with healthy controls, which is in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate a less effective functioning in the recognition process of facial features, which may contribute to a less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
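    The Inter Trial Coherence (ITC) analyzed here measures phase consistency across trials as the length of the mean unit phase vector, ranging from 0 (random phase) to 1 (perfect phase-locking). A minimal sketch on simulated theta-band data; the single-FFT-bin phase estimate keeps it dependency-free, whereas EEG work like this study typically uses wavelet or Hilbert transforms:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 24, 200
fs = 500.0                              # sampling rate in Hz (assumed)
t = np.arange(n_times) / fs

# Simulated single-channel trials: a 5 Hz (theta) response whose phase is
# partially consistent across trials, buried in white noise.
phase_jitter = rng.normal(0, 0.8, n_trials)[:, None]
trials = (np.cos(2 * np.pi * 5 * t + phase_jitter)
          + rng.normal(0, 1, (n_trials, n_times)))

# Phase of each trial at the 5 Hz frequency bin.
freqs = np.fft.rfftfreq(n_times, 1 / fs)
bin_5hz = int(np.argmin(np.abs(freqs - 5.0)))
phases = np.angle(np.fft.rfft(trials, axis=1)[:, bin_5hz])

# ITC: magnitude of the average unit phase vector across trials.
itc = np.abs(np.mean(np.exp(1j * phases)))
print(f"ITC at 5 Hz: {itc:.2f}")
```

Weaker phase-locking, as reported for the patient group, would show up directly as a smaller ITC at the theta bin in the N170 time window.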

  5. Facial Recognition in a Discus Fish (Cichlidae): Experimental Approach Using Digital Models.

    PubMed

    Satoh, Shun; Tanaka, Hirokazu; Kohda, Masanori

    2016-01-01

    A number of mammals and birds are known to be capable of visually discriminating between familiar and unfamiliar individuals, depending on facial patterns in some species. Many fish also visually recognize other conspecifics individually, and previous studies report that facial color patterns can be an initial signal for individual recognition. For example, a cichlid fish and a damselfish will use individual-specific color patterns that develop only in the facial area. However, it remains to be determined whether the facial area is an especially favorable site for visual signals in fish, and if so why? The monogamous discus fish, Symphysodon aequifasciatus (Cichlidae), is capable of visually distinguishing its pair-partner from other conspecifics. Discus fish have individual-specific coloration patterns on the entire body, including the facial area, frontal head, trunk and vertical fins. If the facial area is an inherently important site for visual cues, this species will use facial patterns for individual recognition, but otherwise it will use patterns on other body parts as well. We used modified digital models to examine whether discus fish use only facial coloration for individual recognition. Digital models of four different combinations of familiar and unfamiliar fish faces and bodies were displayed in frontal and lateral views. Focal fish frequently performed partner-specific displays towards partner-face models, and aggressive displays towards models of non-partners' faces. We conclude that, to identify individuals, this fish relies not on frontal color patterns but on lateral facial color patterns, even though it has unique color patterns on other parts of the body. We discuss the significance of facial coloration for individual recognition in fish compared with birds and mammals.

  6. Identity recognition and happy and sad facial expression recall: influence of depressive symptoms.

    PubMed

    Jermann, Françoise; van der Linden, Martial; D'Argembeau, Arnaud

    2008-05-01

    Relatively few studies have examined memory bias for social stimuli in depression or dysphoria. The aim of this study was to investigate the influence of depressive symptoms on memory for facial information. A total of 234 participants completed the Beck Depression Inventory II and a task examining memory for facial identity and expression of happy and sad faces. For both facial identity and expression, the recollective experience was measured with the Remember/Know/Guess procedure (Gardiner & Richardson-Klavehn, 2000). The results show no major association between depressive symptoms and memory for identities. However, dysphoric individuals consciously recalled (Remember responses) more sad facial expressions than non-dysphoric individuals. These findings suggest that sad facial expressions led to more elaborate encoding, and thereby better recollection, in dysphoric individuals.

  7. Appearance-Based Facial Recognition Using Visible and Thermal Imagery: A Comparative Study

    DTIC Science & Technology

    2006-01-01

    Appearance-Based Facial Recognition Using Visible and Thermal Imagery: A Comparative Study. Andrea Selinger, Diego A. Socolinsky (Equinox...).

  8. Violent Media Consumption and the Recognition of Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.

    2006-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph.…

  9. The Differential Effects of Thalamus and Basal Ganglia on Facial Emotion Recognition

    ERIC Educational Resources Information Center

    Cheung, Crystal C. Y.; Lee, Tatia M. C.; Yip, James T. H.; King, Kristin E.; Li, Leonard S. W.

    2006-01-01

    This study examined if subcortical stroke was associated with impaired facial emotion recognition. Furthermore, the lateralization of the impairment and the differential profiles of facial emotion recognition deficits with localized thalamic or basal ganglia damage were also studied. Thirty-eight patients with subcortical strokes and 19 matched…

  10. The Chinese Facial Emotion Recognition Database (CFERD): a computer-generated 3-D paradigm to measure the recognition of facial emotional expressions at different intensities.

    PubMed

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2012-12-30

    The Chinese Facial Emotion Recognition Database (CFERD), a computer-generated three-dimensional (3D) paradigm, was developed to measure the recognition of facial emotional expressions at different intensities. The stimuli consisted of 3D colour photographic images of six basic facial emotional expressions (happiness, sadness, disgust, fear, anger and surprise) and neutral faces of the Chinese. The purpose of the present study is to describe the development and validation of CFERD with nonclinical healthy participants (N=100; 50 men; age ranging between 18 and 50 years), and to generate normative data set. The results showed that the sensitivity index d' [d'=Z(hit rate)-Z(false alarm rate), where function Z(p), p∈[0,1
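    The sensitivity index truncated above is the standard signal-detection measure d' = Z(hit rate) − Z(false-alarm rate), where Z is the inverse of the cumulative normal distribution. A minimal sketch using only the Python standard library:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index d' = Z(hit rate) - Z(false-alarm rate), with Z the
    inverse standard-normal CDF. Rates must lie strictly in (0, 1); in
    practice 0 and 1 are nudged (e.g. by 1/(2N)) before the transform."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# e.g. 84% hits with 16% false alarms gives d' of about 2,
# while equal hit and false-alarm rates give d' = 0 (chance).
print(round(d_prime(0.84, 0.16), 2))
```

Higher d' means the facial expression is more discriminable from the alternatives, independently of any bias toward a particular response.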

  11. Age, gender, and puberty influence the development of facial emotion recognition.

    PubMed

    Lawrence, Kate; Campbell, Ruth; Skuse, David

    2015-01-01

    Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children's ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6-16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modeled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children's ability to recognize facial expressions of happiness, surprise, fear, and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6-16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers.

  12. Age, gender, and puberty influence the development of facial emotion recognition

    PubMed Central

    Lawrence, Kate; Campbell, Ruth; Skuse, David

    2015-01-01

    Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children’s ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6–16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modeled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children’s ability to recognize facial expressions of happiness, surprise, fear, and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6–16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers. PMID:26136697

  13. A Neural Basis of Facial Action Recognition in Humans

    PubMed Central

    Srinivasan, Ramprakash; Golomb, Julie D.

    2016-01-01

    By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and

  14. Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.

    PubMed

    Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S

    2007-01-01

    People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations: (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated the assessed muscles, whereas the chew manipulation activated muscles only intermittently. Further, expressing happiness generated the most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.

  15. Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

    PubMed Central

    Siddiqi, Muhammad Hameed; Lee, Sungyoung; Lee, Young-Koo; Khan, Adil Mehmood; Truc, Phan Tran Ho

    2013-01-01

    Over the last decade, human facial expression recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; the need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish them with high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expression recognition (HL-FER) system to tackle these problems. Unlike previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a hierarchical recognition scheme to overcome the problem of high similarity among different expressions. Unlike most previous works, which were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. A weighted average recognition accuracy of 98.7% across the three datasets, using three classifiers, indicates the success of employing the HL-FER for human FER. PMID:24316568
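
    The discriminant-analysis-plus-cross-validation evaluation at the core of such a system can be illustrated with a minimal two-class Fisher LDA. The pre-processing, face-detection, and hierarchical stages described in the abstract are omitted, and the features below are synthetic stand-ins, not expression data.

```python
# Minimal two-class Fisher linear discriminant with n-fold cross-validation
# (a didactic sketch of the LDA + CV scheme, on invented features).
import numpy as np

rng = np.random.default_rng(1)
n, d = 120, 6
# Two synthetic classes whose means differ by 1.2 in every feature.
X = np.vstack([rng.normal(0, 1, (n // 2, d)), rng.normal(1.2, 1, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

def lda_fit(X, y):
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)  # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)              # discriminant direction
    b = w @ (mu0 + mu1) / 2                         # midpoint threshold
    return w, b

def cv_accuracy(X, y, folds=5):
    idx = rng.permutation(len(y))
    accs = []
    for part in np.array_split(idx, folds):
        mask = np.ones(len(y), bool)
        mask[part] = False                          # hold out this fold
        w, b = lda_fit(X[mask], y[mask])
        accs.append(((X[part] @ w > b).astype(int) == y[part]).mean())
    return float(np.mean(accs))

acc = cv_accuracy(X, y)
```

    Held-out accuracy, averaged over folds, is the quantity the abstract's 98.7% figure summarizes across datasets and classifiers.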

  16. The Relationships among Facial Emotion Recognition, Social Skills, and Quality of Life.

    ERIC Educational Resources Information Center

    Simon, Elliott W.; And Others

    1995-01-01

    Forty-six institutionalized adults with mild or moderate mental retardation were administered the Vineland Adaptive Behavior Scales (socialization domain), a subjective measure of quality of life, and a facial emotion recognition test. Facial emotion recognition, quality of life, and social skills appeared to be independent of one another. Facial…

  17. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  18. Specific Impairments in the Recognition of Emotional Facial Expressions in Parkinson’s Disease

    PubMed Central

    Clark, Uraina S.; Neargarder, Sandy; Cronin-Golomb, Alice

    2008-01-01

    Studies investigating the ability to recognize emotional facial expressions in non-demented individuals with Parkinson’s disease (PD) have yielded equivocal findings. A possible reason for this variability may lie in the confounding of emotion recognition with cognitive task requirements, a confound arising from the lack of a control condition using non-emotional stimuli. The present study examined emotional facial expression recognition abilities in 20 non-demented patients with PD and 23 control participants relative to their performances on a non-emotional landscape categorization test with comparable task requirements. We found that PD participants were normal on the control task but exhibited selective impairments in the recognition of facial emotion, specifically for anger (driven by those with right hemisphere pathology) and surprise (driven by those with left hemisphere pathology), even when controlling for depression level. Male but not female PD participants further displayed specific deficits in the recognition of fearful expressions. We suggest that the neural substrates that may subserve these impairments include the ventral striatum, amygdala, and prefrontal cortices. Finally, we observed that in PD participants, deficiencies in facial emotion recognition correlated with higher levels of interpersonal distress, which calls attention to the significant psychosocial impact that facial emotion recognition impairments may have on individuals with PD. PMID:18485422

  19. [Impact of facial emotional recognition alterations in Dementia of the Alzheimer type].

    PubMed

    Rubinstein, Wanda; Cossini, Florencia; Politis, Daniel

    2016-07-01

    Face recognition of basic emotions is independent of other deficits in dementia of the Alzheimer type. Among these deficits, there is disagreement about which emotions are more difficult to recognize. Our aim was to study the presence of alterations in the process of facial recognition of basic emotions, and to investigate whether there were differences in the recognition of each type of emotion, in Alzheimer's disease. With three tests of basic facial emotion recognition we evaluated 29 patients who had been diagnosed with dementia of the Alzheimer type and 18 control subjects. Significant differences between groups were obtained on the tests of basic facial emotion recognition, as well as between the individual emotions. Since the amygdala, one of the brain structures responsible for emotional reactions, is affected in the early stages of this disease, our findings become relevant to understanding how this alteration of the process of emotional recognition contributes to the difficulties these patients have with both interpersonal relations and behavioral disorders.

  20. Facial Recognition in a Discus Fish (Cichlidae): Experimental Approach Using Digital Models

    PubMed Central

    Satoh, Shun; Tanaka, Hirokazu; Kohda, Masanori

    2016-01-01

    A number of mammals and birds are known to be capable of visually discriminating between familiar and unfamiliar individuals, depending on facial patterns in some species. Many fish also visually recognize other conspecifics individually, and previous studies report that facial color patterns can be an initial signal for individual recognition. For example, a cichlid fish and a damselfish will use individual-specific color patterns that develop only in the facial area. However, it remains to be determined whether the facial area is an especially favorable site for visual signals in fish, and if so, why. The monogamous discus fish, Symphysodon aequifasciatus (Cichlidae), is capable of visually distinguishing its pair-partner from other conspecifics. Discus fish have individual-specific coloration patterns on the entire body, including the facial area, frontal head, trunk, and vertical fins. If the facial area is an inherently important site for visual cues, this species will use facial patterns for individual recognition; otherwise it will use patterns on other body parts as well. We used modified digital models to examine whether discus fish use only facial coloration for individual recognition. Digital models of four different combinations of familiar and unfamiliar fish faces and bodies were displayed in frontal and lateral views. Focal fish frequently performed partner-specific displays towards partner-face models, and performed aggressive displays towards models of non-partners' faces. We conclude that to identify individuals this fish depends not on frontal color patterns but on lateral facial color patterns, even though it has unique color patterns on other parts of the body. We discuss the significance of facial coloration for individual recognition in fish compared with birds and mammals. PMID:27191162

  1. In what sense 'familiar'? Examining experiential differences within pathologies of facial recognition.

    PubMed

    Young, Garry

    2009-09-01

    Explanations of Capgras delusion and prosopagnosia typically incorporate a dual-route approach to facial recognition in which a deficit in overt or covert processing in one condition is mirror-reversed in the other. Despite this double dissociation, experiences of either patient-group are often reported in the same way--as lacking a sense of familiarity toward familiar faces. In this paper, deficits in the facial processing of these patients are compared to other facial recognition pathologies, and their experiential characteristics mapped onto the dual-route model in order to provide a less ambiguous link between facial processing and experiential content. The paper concludes that the experiential states of Capgras delusion, prosopagnosia, and related facial pathologies are quite distinct, and that this descriptive distinctiveness finds explanatory equivalence at the level of anatomical and functional disruption within the face recognition system. The role of skin conductance response (SCR) as a measure of 'familiarity' is also clarified.

  2. Impaired Facial Expression Recognition in Children with Temporal Lobe Epilepsy: Impact of Early Seizure Onset on Fear Recognition

    ERIC Educational Resources Information Center

    Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle

    2008-01-01

    The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…

  3. Updating schematic emotional facial expressions in working memory: Response bias and sensitivity.

    PubMed

    Tamm, Gerly; Kreegipuu, Kairi; Harro, Jaanus; Cowan, Nelson

    2017-01-01

    It is unclear if positive, negative, or neutral emotional expressions have an advantage in short-term recognition. Moreover, it is unclear from previous studies of working memory for emotional faces whether effects of emotions comprise response bias or sensitivity. The aim of this study was to compare how schematic emotional expressions (sad, angry, scheming, happy, and neutral) are discriminated and recognized in an updating task (2-back recognition) in a representative birth-cohort sample of young adults. Schematic facial expressions allow control of identity processing, which is separate from expression processing, and have been used extensively in attention research but not much, until now, in working memory research. We found that expressions with a U-curved mouth (i.e., upwardly curved), namely happy and scheming expressions, favoured a bias towards recognition (i.e., towards indicating that the probe and the stimulus in working memory are the same). Other effects of emotional expression were considerably smaller (1-2% of the variance explained) compared to the large proportion of variance that was explained by the physical similarity of the items being compared. We suggest that the nature of the stimuli plays a role in this. The present application of signal detection methodology with emotional, schematic faces in a working memory procedure requiring fast comparisons helps to resolve important contradictions that have emerged in the emotional perception literature. Copyright © 2016 Elsevier B.V. All rights reserved.
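
    The signal-detection decomposition this abstract relies on separates sensitivity (d') from response bias (criterion c), both computed from hit and false-alarm rates. A minimal sketch of that arithmetic, using illustrative counts rather than the study's data:

```python
# d' (sensitivity) and c (response bias) from hit/false-alarm counts,
# as used to separate bias from sensitivity in 2-back recognition.
from statistics import NormalDist

def dprime_and_criterion(hits, misses, fas, crs):
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    h = (hits + 0.5) / (hits + misses + 1)   # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1)        # corrected false-alarm rate
    z = NormalDist().inv_cdf
    d_prime = z(h) - z(f)                    # sensitivity
    c = -(z(h) + z(f)) / 2                   # bias (negative = "same"-prone)
    return d_prime, c

# Hypothetical counts for one participant and expression category.
d, c = dprime_and_criterion(hits=40, misses=10, fas=15, crs=35)
```

    A "bias towards recognition", as reported for happy and scheming expressions, shows up as a negative criterion c at comparable d'.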

  4. Recognition of Facial Emotional Expression in Amnestic Mild Cognitive Impairment

    PubMed Central

    Varjassyová, Alexandra; Hořínek, Daniel; Andel, Ross; Amlerova, Jana; Laczó, Jan; Sheardová, Kateřina; Magerová, Hana; Holmerová, Iva; Vyhnálek, Martin; Bradáč, Ondřej; Geda, Yonas E.; Hort, Jakub

    2014-01-01

    We examined whether recognition of facial emotional expression would be affected in amnestic mild cognitive impairment (aMCI). A total of 50 elderly persons met the initial inclusion criteria; 10 were subsequently excluded (Geriatric Depression Scale score >5). Twenty-two subjects were classified with aMCI based on published criteria (single-domain aMCI [SD-aMCI], n = 10; multiple-domain aMCI [MD-aMCI], n = 12); 18 subjects were cognitively normal. All underwent standard neurological and neuropsychological evaluations as well as tests of facial emotion recognition (FER) and famous faces identification (FFI). Among normal controls, FFI was negatively correlated with MMSE and positively correlated with executive function. Among patients with aMCI, FER was correlated with attention/speed of processing. No other correlations were significant. In a multinomial logistic regression model adjusted for age, sex, and education, a poorer score on FER, but not on FFI, was associated with greater odds of being classified as MD-aMCI (odds ratio [OR], 3.82; 95% confidence interval [CI], 1.05–13.91; p = 0.042). This association was not explained by memory or global cognitive score. There was no association between FER or FFI and SD-aMCI (OR, 1.13; 95% CI, 0.36–3.57; p = 0.836). Therefore, FER, but not FFI, may be impaired in MD-aMCI. This implies that in MD-aMCI, the tasks of FER and FFI may involve segregated neurocognitive networks. PMID:22954669
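
    The odds ratio and confidence interval reported above (OR 3.82; 95% CI 1.05–13.91) are the exponentiated regression coefficient and its Wald interval from the logistic model. A sketch of that arithmetic; the standard error below is a hypothetical value chosen to approximately reproduce the reported interval, not a figure from the study:

```python
# Odds ratio and 95% Wald CI from a logistic-regression coefficient.
import math

def odds_ratio_ci(b, se, z=1.96):
    """Exponentiate the coefficient and its z-interval endpoints."""
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

b = math.log(3.82)   # coefficient implied by the reported OR of 3.82
se = 0.66            # hypothetical standard error (not from the paper)
or_, lo, hi = odds_ratio_ci(b, se)
```

    A CI whose lower bound barely clears 1.0, as here, matches the reported p-value sitting just under 0.05.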

  5. Facial Expression Recognition Deficits and Faulty Learning: Implications for Theoretical Models and Clinical Applications

    ERIC Educational Resources Information Center

    Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige

    2009-01-01

    The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…

  6. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy of affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of six basic emotional states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotion-less state, which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information, and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.

  7. Expression intensity, gender and facial emotion recognition: Women recognize only subtle facial emotions better than men.

    PubMed

    Hoffmann, Holger; Kessler, Henrik; Eppel, Tobias; Rukavina, Stefanie; Traue, Harald C

    2010-11-01

    Two experiments were conducted in order to investigate the effect of expression intensity on gender differences in the recognition of facial emotions. The first experiment compared recognition accuracy between female and male participants when emotional faces were shown with full-blown (100% emotional content) or subtle expressiveness (50%). In a second experiment more finely grained analyses were applied in order to measure recognition accuracy as a function of expression intensity (40%-100%). The results show that although women were more accurate than men in recognizing subtle facial displays of emotion, there was no difference between male and female participants when recognizing highly expressive stimuli. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. Facial expressions recognition with an emotion expressive robotic head

    NASA Astrophysics Data System (ADS)

    Doroftei, I.; Adascalitei, F.; Lefeber, D.; Vanderborght, B.; Doroftei, I. A.

    2016-08-01

    The purpose of this study is to present the preliminary steps in facial expression recognition with a new version of an expressive social robotic head. In a first phase, our main goal was to reach a minimum level of emotional expressiveness, in order to obtain nonverbal communication between the robot and humans, by building six basic facial expressions. To evaluate the facial expressions, the robot was used in preliminary user studies with children and adults.

  9. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smiles), despite variability among individuals as well as in face appearance, is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
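
    The convolutional feature extraction at the heart of such a network can be written out in a few lines. This is a didactic numpy sketch of convolution, ReLU, and max pooling, not the authors' architecture; the kernel and the 8x8 "image" are invented.

```python
# Convolution, ReLU, and max pooling: the core operations of a CNN layer,
# spelled out in plain numpy for illustration.
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    out = np.empty((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that don't fit a full window."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

rng = np.random.default_rng(2)
face = rng.normal(size=(8, 8))                       # stand-in for a face patch
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])   # vertical-edge detector
features = max_pool(np.maximum(conv2d(face, edge_kernel), 0))  # ReLU then pool
```

    Stacking such layers, with learned rather than hand-picked kernels, is what gives the network its translation and scale tolerance.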

  10. Deficits in recognition, identification, and discrimination of facial emotions in patients with bipolar disorder.

    PubMed

    Benito, Adolfo; Lahera, Guillermo; Herrera, Sara; Muncharaz, Ramón; Benito, Guillermo; Fernández-Liria, Alberto; Montes, José Manuel

    2013-01-01

    To analyze the recognition, identification, and discrimination of facial emotions in a sample of outpatients with bipolar disorder (BD). Forty-four outpatients with a diagnosis of BD and 48 matched control subjects were selected. Both groups were assessed with tests for recognition (Emotion Recognition-40 - ER40), identification (Facial Emotion Identification Test - FEIT), and discrimination (Facial Emotion Discrimination Test - FEDT) of facial emotions, as well as a theory of mind (ToM) verbal test (Hinting Task). Differences between groups were analyzed, controlling for the influence of mild depressive and manic symptoms. Patients with BD scored significantly lower than controls on recognition (ER40), identification (FEIT), and discrimination (FEDT) of emotions. Regarding the verbal measure of ToM, a lower score was also observed in patients compared to controls. Patients with mild syndromal depressive symptoms obtained outcomes similar to patients in euthymia. A significant correlation between FEDT scores and global functioning (measured by the Functioning Assessment Short Test, FAST) was found. These results suggest that, even in euthymia, patients with BD experience deficits in recognition, identification, and discrimination of facial emotions, with potential functional implications.

  11. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at the population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
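
    The effect-size pooling described above can be sketched with Fisher's r-to-z transform and a DerSimonian-Laird random-effects model. The per-study correlations and sample sizes below are invented stand-ins, not values from this meta-analysis.

```python
# Random-effects pooling of correlation effect sizes via Fisher's r-to-z
# transform (DerSimonian-Laird estimator); study data are invented.
import math

studies = [(0.30, 120), (0.45, 80), (0.25, 200), (0.38, 60)]  # (r, N) per study

z = [0.5 * math.log((1 + r) / (1 - r)) for r, _ in studies]   # Fisher z
v = [1 / (n - 3) for _, n in studies]                         # sampling variance
w = [1 / vi for vi in v]                                      # fixed-effect weights

# Fixed-effect estimate and Q statistic for heterogeneity.
z_fe = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
Q = sum(wi * (zi - z_fe) ** 2 for wi, zi in zip(w, z))
df = len(studies) - 1

# DerSimonian-Laird between-study variance, then random-effects pooling.
tau2 = max(0.0, (Q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
w_re = [1 / (vi + tau2) for vi in v]
z_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
r_pooled = math.tanh(z_re)   # back-transform from z to r
```

    Moderator analyses, such as the photographic-vs-schematic split reported above, amount to running this pooling separately within subgroups and comparing the estimates.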

  12. Facial emotion recognition is inversely correlated with tremor severity in essential tremor.

    PubMed

    Auzou, Nicolas; Foubert-Samier, Alexandra; Dupouy, Sandrine; Meissner, Wassilios G

    2014-04-01

    We here assess limbic and orbitofrontal control in 20 patients with essential tremor (ET) and 18 age-matched healthy controls using the Ekman Facial Emotion Recognition Task and the IOWA Gambling Task. Our results show an inverse relation between facial emotion recognition and tremor severity. ET patients also showed worse performance in joy and fear recognition, as well as subtle abnormalities in risk detection, but these differences did not reach significance after correction for multiple testing.

  13. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Dissociation between facial and bodily expressions in emotion recognition: A case study.

    PubMed

    Leiva, Samanta; Margulis, Laura; Micciulli, Andrea; Ferreres, Aldo

    2017-12-21

    Existing single-case studies have reported deficits in recognizing basic emotions through facial expression and unaffected performance with body expressions, but not the opposite pattern. The aim of this paper is to present a case study with impaired emotion recognition through body expressions and intact performance with facial expressions. In this single-case study we assessed a 30-year-old patient with autism spectrum disorder, without intellectual disability, and a healthy control group (n = 30) with four tasks of basic and complex emotion recognition through face and body movements, and two non-emotional control tasks. To analyze the dissociation between facial and body expressions, we used Crawford and Garthwaite's operational criteria, and we compared the patient's and the control group's performance with a modified one-tailed t-test designed specifically for single-case studies. There were no statistically significant differences between the patient's and the control group's performances on the non-emotional body movement task or the facial perception task. For both kinds of emotions (basic and complex), when the patient's performance was compared to the control group's, statistically significant differences were only observed for the recognition of body expressions. There were no significant differences between the patient's and the control group's correct answers for emotional facial stimuli. Our results showed a profile of impaired emotion recognition through body expressions and intact performance with facial expressions. This is the first case study that describes the existence of this kind of dissociation pattern between facial and body expressions of basic and complex emotions.
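
    The "modified one-tailed t-test designed specifically for single-case studies" is, in the formulation associated with Crawford and colleagues, a t statistic that treats the control group as a sample rather than a population. A sketch with hypothetical scores (not the study's data):

```python
# Modified t-test for comparing one case against a small control sample,
# in the style used for single-case dissociation analyses.
import math
from statistics import mean, stdev

def single_case_t(case_score, controls):
    """t statistic and degrees of freedom for case vs. control sample."""
    n = len(controls)
    m, s = mean(controls), stdev(controls)
    # The (n + 1) / n factor accounts for sampling error in the control mean.
    t = (case_score - m) / (s * math.sqrt((n + 1) / n))
    return t, n - 1

controls = [22, 24, 25, 23, 26, 24, 25, 23, 24, 25]  # hypothetical control scores
t, df = single_case_t(15, controls)                   # strongly negative t
```

    A one-tailed p-value from the t distribution with n - 1 degrees of freedom then tests whether the case falls below the control range, the comparison run here for each emotion-recognition task.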

  15. Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson's Disease.

    PubMed

    Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo

    2016-01-01

    Altered emotional processing, including reduced emotion facial expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques. It is not known whether altered facial expression and recognition in PD are related. To investigate possible deficits in facial emotion expression and emotion recognition and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using the Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Patients also yielded worse Ekman global scores and disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.001). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results in this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.

  16. Recovering faces from memory: the distracting influence of external facial features.

    PubMed

    Frowd, Charlie D; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H; Hancock, Peter J B

    2012-06-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried out by witnesses and victims of crime, the role of external features (hair, ears, and neck) is less clear, although research does suggest their involvement. Here, over three experiments, we investigate the impact of external features for recovering facial memories using a modern, recognition-based composite system, EvoFIT. Participant-constructors inspected an unfamiliar target face and, one day later, repeatedly selected items from arrays of whole faces, with "breeding," to "evolve" a composite with EvoFIT; further participants (evaluators) named the resulting composites. In Experiment 1, the important internal-features (eyes, brows, nose, and mouth) were constructed more identifiably when the visual presence of external features was decreased by Gaussian blur during construction: higher blur yielded more identifiable internal-features. In Experiment 2, increasing the visible extent of external features (to match the target's) in the presented face-arrays also improved internal-features quality, although less so than when external features were masked throughout construction. Experiment 3 demonstrated that masking external-features promoted substantially more identifiable images than using the previous method of blurring external-features. Overall, the research indicates that external features are a distractive rather than a beneficial cue for face construction; the results also provide a much better method to construct composites, one that should dramatically increase identification of offenders.

  17. Oxytocin Promotes Facial Emotion Recognition and Amygdala Reactivity in Adults with Asperger Syndrome

    PubMed Central

    Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C

    2014-01-01

    The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS. PMID:24067301

  18. Oxytocin promotes facial emotion recognition and amygdala reactivity in adults with asperger syndrome.

    PubMed

    Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C

    2014-02-01

    The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS.

  19. Association of enhanced limbic response to threat with decreased cortical facial recognition memory response in schizophrenia

    PubMed Central

    Satterthwaite, Theodore D.; Wolf, Daniel H.; Loughead, James; Ruparel, Kosha; Valdez, Jeffrey N.; Siegel, Steven J.; Kohler, Christian G.; Gur, Raquel E.; Gur, Ruben C.

    2014-01-01

    Objective Recognition memory for faces is impaired in patients with schizophrenia, as is the neural processing of threat-related signals, but how these deficits interact to produce symptoms is unclear. Here we used an affective face recognition paradigm to examine possible interactions between cognitive and affective neural systems in schizophrenia. Methods fMRI (3T) BOLD response was examined in 21 controls and 16 patients during a two-choice recognition task using images of human faces. Each target face had previously been displayed with a threatening or non-threatening affect, but was displayed here with neutral affect. Responses to successful recognition and to the effect of previously threatening vs. non-threatening affect were evaluated, and correlations with total BPRS scores were examined. Functional connectivity analyses examined the relationship between activation in the amygdala and cortical regions involved in recognition memory. Results Patients performed the task more slowly than controls. Controls recruited the expected cortical regions to a greater degree than patients, and patients with more severe symptoms demonstrated proportionally less recruitment. Increased symptoms were also correlated with augmented amygdala and orbitofrontal cortex responses to threatening faces. Controls exhibited a negative correlation between activity in the amygdala and cortical regions involved in cognition, while patients showed a weakening of that relationship. Conclusions Increased symptoms were related to an enhanced threat response in limbic regions and a diminished recognition memory response in cortical regions, supporting a link between two brain systems often examined in isolation. This finding suggests that abnormal processing of threat-related signals in the environment may exacerbate cognitive impairment in schizophrenia. PMID:20194482

  20. Theory of mind and recognition of facial emotion in dementia: challenge to current concepts.

    PubMed

    Freedman, Morris; Binns, Malcolm A; Black, Sandra E; Murphy, Cara; Stuss, Donald T

    2013-01-01

    Current literature suggests that theory of mind (ToM) and recognition of facial emotion are impaired in behavioral variant frontotemporal dementia (bvFTD). In contrast, studies suggest that ToM is spared in Alzheimer disease (AD). However, there is controversy whether recognition of emotion in faces is impaired in AD. This study challenges the concepts that ToM is preserved in AD and that recognition of facial emotion is impaired in bvFTD. ToM, recognition of facial emotion, and identification of emotions associated with video vignettes were studied in bvFTD, AD, and normal controls. ToM was assessed using false-belief and visual perspective-taking tasks. Identification of facial emotion was tested using Ekman and Friesen's pictures of facial affect. After adjusting for relevant covariates, there were significant ToM deficits in bvFTD and AD compared with controls, whereas neither group was impaired in the identification of emotions associated with video vignettes. There was borderline impairment in recognizing angry faces in bvFTD. Patients with AD showed significant deficits on false belief and visual perspective taking, and bvFTD patients were impaired on second-order false belief. We report novel findings challenging the concepts that ToM is spared in AD and that recognition of facial emotion is impaired in bvFTD.

  1. Facial affect recognition in symptomatically remitted patients with schizophrenia and bipolar disorder.

    PubMed

    Yalcin-Siedentopf, Nursen; Hoertnagl, Christine M; Biedermann, Falko; Baumgartner, Susanne; Deisenhammer, Eberhard A; Hausmann, Armand; Kaufmann, Alexandra; Kemmler, Georg; Mühlbacher, Moritz; Rauch, Anna-Sophia; Fleischhacker, W Wolfgang; Hofer, Alex

    2014-02-01

    Both schizophrenia and bipolar disorder (BD) have consistently been associated with deficits in facial affect recognition (FAR). These impairments have been related to various aspects of social competence and functioning and are relatively stable over time. However, individuals in remission may outperform patients experiencing an acute phase of the disorders. The present study directly contrasted FAR in symptomatically remitted patients with schizophrenia or BD and healthy volunteers and investigated its relationship with patients' outcomes. Compared to healthy control subjects, schizophrenia patients were impaired in the recognition of angry, disgusted, sad and happy facial expressions, while BD patients showed deficits only in the recognition of disgusted and happy facial expressions. When directly comparing the two patient groups, individuals suffering from BD outperformed those with schizophrenia in the recognition of expressions depicting anger. There was no significant association between affect recognition abilities and symptomatic or psychosocial outcomes in schizophrenia patients. Among BD patients, relatively higher depression scores were associated with impairments in both the identification of happy faces and psychosocial functioning. Overall, our findings indicate that during periods of symptomatic remission the recognition of facial affect may be less impaired in patients with BD than in those suffering from schizophrenia. However, in the psychosocial context BD patients seem to be more sensitive to residual symptomatology. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale-invariant feature transform (SIFT) with support vector machines (SVM) in the automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem; seven classes are distinguished: happiness, anger, sadness, disgust, surprise, fear and contempt. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is achieved on the CK+ database. On the other hand, the performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person-independent (SPI) protocol; the seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if a good localization of facial points and a sound partitioning strategy are followed.
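
    The LBP half of this pipeline can be sketched compactly. Below is a minimal, NumPy-only illustration of the basic 8-neighbour LBP descriptor turned into a histogram feature; it is a sketch of the general technique, not the study's implementation (the block partitioning, SIFT features, and SVM training on CK+/SFEW are omitted, and the function names are ours).

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour local binary pattern codes for interior pixels."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, n_bins=256):
    """Normalised histogram of LBP codes: the texture feature vector
    that would be fed to a classifier such as a linear SVM."""
    hist, _ = np.histogram(lbp_codes(img), bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()
```

    In a full pipeline the face would first be partitioned into a grid of cells, with one such histogram per cell concatenated into the final descriptor.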

  3. Facial Emotion Recognition in Child Psychiatry: A Systematic Review

    ERIC Educational Resources Information Center

    Collin, Lisa; Bindra, Jasmeet; Raju, Monika; Gillberg, Christopher; Minnis, Helen

    2013-01-01

    This review focuses on facial affect (emotion) recognition in children and adolescents with psychiatric disorders other than autism. A systematic search, using PRISMA guidelines, was conducted to identify original articles published prior to October 2011 pertaining to face recognition tasks in case-control studies. Used in the qualitative…

  4. Recognition memory for emotional and neutral faces: an event-related potential study.

    PubMed

    Johansson, Mikael; Mecklinger, Axel; Treese, Anne-Cécile

    2004-12-01

    This study examined emotional influences on the hypothesized event-related potential (ERP) correlates of familiarity and recollection (Experiment 1) and the states of awareness (Experiment 2) accompanying recognition memory for faces differing in facial affect. Participants made gender judgments to positive, negative, and neutral faces at study and were instructed in the test phase to discriminate between studied and nonstudied faces. Whereas old-new discrimination was unaffected by facial expression, negative faces were recollected to a greater extent than both positive and neutral faces, as reflected in the parietal ERP old-new effect and in the proportion of remember judgments. Moreover, emotion-specific modulations were observed in frontally recorded ERPs elicited by correctly rejected new faces, concurring with a more liberal response criterion for emotional as compared to neutral faces. Taken together, the results are consistent with the view that processes promoting recollection are facilitated for negative events and that emotion may affect recognition performance by influencing criterion setting mediated by the prefrontal cortex.

  5. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    PubMed Central

    Wang, Yamin; Fu, Xiaolan; Johnston, Robert A.; Yan, Zheng

    2013-01-01

    Using Garner’s speeded classification task, existing studies have demonstrated an asymmetric interference in the recognition of facial identity and facial expression: expression appears unable to interfere with identity recognition. However, the discriminability of identity and expression, a potential confounding variable, had not been carefully examined in those studies. In the current work, we manipulated the discriminability of identity and expression by matching facial shape (long or round) for identity and matching the mouth (opened or closed) for facial expression. Garner interference was found either from identity to expression (Experiment 1) or from expression to identity (Experiment 2). Interference was also found in both directions (Experiment 3) or in neither direction (Experiment 4). The results indicate that Garner interference tends to occur when the discriminability of the relevant dimension is low, regardless of the facial property involved. Our findings suggest that Garner interference is not necessarily related to interdependent processing in the recognition of facial identity and expression. The findings also suggest that discriminability, as a mediating factor, should be carefully controlled in future research. PMID:24391609

  6. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    PubMed

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions depend to different degrees on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions, using a complementary method that involved combining the surface properties of one expression with the shape properties of a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Instructions to mimic improve facial emotion recognition in people with sub-clinical autism traits.

    PubMed

    Lewis, Michael B; Dunn, Emily

    2017-11-01

    People tend to mimic the facial expressions of others. It has been suggested that this helps provide social glue between affiliated people, but it could also aid recognition of emotions through embodied cognition. The degree of facial mimicry, however, varies between individuals and is limited in people with autism spectrum conditions (ASC). The present study sought to investigate the effect of promoting facial mimicry during a facial-emotion-recognition test. In two experiments, participants without an ASC diagnosis had their autism quotient (AQ) measured. Following a baseline test, they completed the emotion-recognition test a second time; half of the participants were asked to mimic the target face they saw prior to making their responses. Mimicry improved emotion recognition, and further analysis revealed that the largest improvement was for participants with higher autism-trait scores. In fact, recognition performance was best overall for people who had high AQ scores but also received the instruction to mimic. Implications for people with ASC are explored.

  8. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    PubMed

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.

  9. Facial soft biometric features for forensic face recognition.

    PubMed

    Tome, Pedro; Vera-Rodriguez, Ruben; Fierrez, Julian; Ortega-Garcia, Javier

    2015-12-01

    This paper proposes a functional feature-based approach useful for real forensic caseworks, based on the shape, orientation and size of facial traits, which can be considered as a soft biometric approach. The motivation of this work is to provide a set of facial features, which can be understood by non-experts such as judges and support the work of forensic examiners who, in practice, carry out a thorough manual comparison of face images paying special attention to the similarities and differences in shape and size of various facial traits. This new approach constitutes a tool that automatically converts a set of facial landmarks to a set of features (shape and size) corresponding to facial regions of forensic value. These features are furthermore evaluated in a population to generate statistics to support forensic examiners. The proposed features can also be used as additional information that can improve the performance of traditional face recognition systems. These features follow the forensic methodology and are obtained in a continuous and discrete manner from raw images. A statistical analysis is also carried out to study the stability, discrimination power and correlation of the proposed facial features on two realistic databases: MORPH and ATVS Forensic DB. Finally, the performance of both continuous and discrete features is analyzed using different similarity measures. Experimental results show high discrimination power and good recognition performance, especially for continuous features. A final fusion of the best systems configurations achieves rank 10 match results of 100% for ATVS database and 75% for MORPH database demonstrating the benefits of using this information in practice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
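
    The landmark-to-feature conversion described above can be illustrated with a small sketch. This is a hypothetical simplification, assuming 2D landmark coordinates and hand-chosen landmark index groups per facial trait; it is not the authors' actual feature definition.

```python
import numpy as np

def polygon_area(pts):
    """Shoelace area of a landmark polygon (a size feature)."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def trait_features(landmarks, regions):
    """Convert raw landmarks into per-trait (width, height, area) features.

    `landmarks` is an (N, 2) array; `regions` maps a trait name to the
    landmark indices outlining it (names and indices here are illustrative).
    """
    feats = {}
    for name, idx in regions.items():
        pts = landmarks[idx]
        width = np.ptp(pts[:, 0])    # horizontal extent (shape/size cue)
        height = np.ptp(pts[:, 1])   # vertical extent
        feats[name] = (width, height, polygon_area(pts))
    return feats
```

    Discrete versions of such features (e.g., a "wide" vs. "narrow" nose) could then be obtained by thresholding the continuous values against population statistics, in line with the forensic methodology the paper describes.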

  10. Oxytocin improves facial emotion recognition in young adults with antisocial personality disorder.

    PubMed

    Timmermann, Marion; Jeung, Haang; Schmitt, Ruth; Boll, Sabrina; Freitag, Christine M; Bertsch, Katja; Herpertz, Sabine C

    2017-11-01

    Deficient facial emotion recognition has been suggested to underlie aggression in individuals with antisocial personality disorder (ASPD). As the neuropeptide oxytocin (OT) has been shown to improve facial emotion recognition, it might also exert beneficial effects in individuals who inflict serious harm on society. In a double-blind, randomized, placebo-controlled crossover trial, 22 individuals with ASPD and 29 healthy control (HC) subjects (matched for age, sex, intelligence, and education) were intranasally administered either OT (24 IU) or a placebo 45 min before participating in an emotion classification paradigm with fearful, angry, and happy faces. We assessed the number of correct classifications and reaction times as indicators of emotion recognition ability. Significant group × substance × emotion interactions were found in correct classifications and reaction times. Compared to HC subjects, individuals with ASPD showed deficits in recognizing fearful and happy faces; these group differences were no longer observable under OT. Additionally, reaction times for angry faces differed significantly between the ASPD and HC groups in the placebo condition. This effect was mainly driven by longer reaction times in HC subjects after placebo administration compared to OT administration, while individuals with ASPD showed, descriptively, the opposite response pattern. Our data indicate an improvement in the recognition of fearful and happy facial expressions by OT in young adults with ASPD. The increased recognition of facial fear is of particular importance, since the correct perception of distress signals in others is thought to inhibit aggression. Beneficial effects of OT might be further mediated by improved recognition of facial happiness, probably reflecting increased social reward responsiveness. Copyright © 2017. Published by Elsevier Ltd.

  11. Development of Emotional Facial Recognition in Late Childhood and Adolescence

    ERIC Educational Resources Information Center

    Thomas, Laura A.; De Bellis, Michael D.; Graham, Reiko; Labar, Kevin S.

    2007-01-01

    The ability to interpret emotions in facial expressions is crucial for social functioning across the lifespan. Facial expression recognition develops rapidly during infancy and improves with age during the preschool years. However, the developmental trajectory from late childhood to adulthood is less clear. We tested older children, adolescents…

  12. Recognition of face and non-face stimuli in autistic spectrum disorder.

    PubMed

    Arkush, Leo; Smith-Collins, Adam P R; Fiorentini, Chiara; Skuse, David H

    2013-12-01

    The ability to remember faces is critical for the development of social competence. From childhood to adulthood, we acquire a high level of expertise in the recognition of facial images, and neural processes become dedicated to sustaining competence. Many people with autism spectrum disorder (ASD) have poor face recognition memory; changes in hairstyle or other non-facial features in an otherwise familiar person affect their recollection skills. This observation implies that they may not use the configuration of the inner face to achieve memory competence, but bolster performance in other ways. We aimed to test this hypothesis by comparing the performance of a group of high-functioning unmedicated adolescents with ASD and a matched control group on a "surprise" face recognition memory task. We compared their memory for unfamiliar faces with their memory for images of houses. To evaluate the role played by peripheral cues in assisting recognition memory, we cropped both sets of pictures, retaining only the most salient central features. ASD adolescents had poorer recognition memory for faces than typical controls, but their recognition memory for houses was unimpaired. Cropping images of faces did not disproportionately influence their recall accuracy relative to controls. House recognition skills (cropped and uncropped) were similar in both groups. In the ASD group only, performance on both sets of tasks was closely correlated, implying that memory for faces and other complex pictorial stimuli is achieved by domain-general (non-dedicated) cognitive mechanisms. Adolescents with ASD apparently do not use domain-specialized processing of inner facial cues to support face recognition memory. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.

  13. Biases in facial and vocal emotion recognition in chronic schizophrenia

    PubMed Central

    Dondaine, Thibaut; Robert, Gabriel; Péron, Julie; Grandjean, Didier; Vérin, Marc; Drapier, Dominique; Millet, Bruno

    2014-01-01

    There has been extensive research on impaired emotion recognition in schizophrenia in the facial and vocal modalities. The literature points to biases toward non-relevant emotions for emotional faces, but few studies have examined biases in emotion recognition across different modalities (facial and vocal). In order to test for emotion recognition biases, we exposed 23 patients with stabilized chronic schizophrenia and 23 healthy controls (HCs) to emotional facial and vocal tasks, asking them to rate emotional intensity on visual analog scales. We showed that patients with schizophrenia provided higher intensity ratings on the non-target scales (e.g., the surprise scale for fear stimuli) than HCs for both tasks. Furthermore, with the exception of neutral vocal stimuli, they provided the same intensity ratings on the target scales as the HCs. These findings suggest that patients with chronic schizophrenia have emotional biases when judging emotional stimuli in the visual and vocal modalities. These biases may stem from a basic sensorial deficit, a high-order cognitive dysfunction, or both. The respective roles of prefrontal-subcortical circuitry and the basal ganglia are discussed. PMID:25202287

  14. Does vigilance to pain make individuals experts in facial recognition of pain?

    PubMed

    Baum, Corinna; Kappesser, Judith; Schneider, Raphaela; Lautenbacher, Stefan

    2013-01-01

    It is well known that individual factors are important in the facial recognition of pain. However, it is unclear whether vigilance to pain, as a pain-related attentional mechanism, is among these relevant factors. Vigilance to pain may have two different effects on the recognition of facial pain expressions: pain-vigilant individuals may detect pain faces better but overinclude other facial displays, misinterpreting them as expressing pain; or they may be true experts in discriminating between pain and other facial expressions. The present study aimed to test these two hypotheses. Furthermore, pain vigilance was assumed to be a distinct predictor, the impact of which on recognition cannot be completely replaced by related concepts such as pain catastrophizing and fear of pain. Photographs of neutral, happy, angry and pain facial expressions were presented to 40 healthy participants, who were asked to classify them into the appropriate emotion categories and provide a confidence rating for each classification. Additionally, potential predictors of the discrimination performance for pain and anger faces (pain vigilance, pain-related catastrophizing, and fear of pain) were assessed using self-report questionnaires. Pain-vigilant participants classified pain faces more accurately and did not misclassify anger faces as pain faces more frequently. However, vigilance to pain was not related to the confidence of recognition ratings. Pain catastrophizing and fear of pain did not account for the recognition performance. Moderate pain vigilance, as assessed in the present study, appears to be associated with appropriate detection of pain-related cues and not necessarily with the overinclusion of other negative cues.

  15. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    NASA Astrophysics Data System (ADS)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy (mRMR) geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
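
    A greedy maximum-relevance minimum-redundancy selection loop of the kind named above can be sketched as follows. As a stated assumption, absolute Pearson correlation stands in for the mutual-information estimates normally used in mRMR, and the paper's geometrical feature definitions and SVM classifier are not reproduced.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedily pick k columns of X: maximize relevance to y while
    penalizing mean redundancy with already-selected columns.
    |Pearson correlation| serves as a simple dependence measure."""
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]      # start with the most relevant
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy   # the mRMR difference criterion
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```

    The selected feature subset would then be passed to a multi-class one-against-one SVM, as in the paper.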

  16. Recognition of facial expressions and prosodic cues with graded emotional intensities in adults with Asperger syndrome.

    PubMed

    Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-09-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.

  17. [Recognition of facial expression of emotions in Parkinson's disease: a theoretical review].

    PubMed

    Alonso-Recio, L; Serrano-Rodriguez, J M; Carvajal-Molina, F; Loeches-Alonso, A; Martin-Plasencia, P

    2012-04-16

    Emotional facial expression is a basic guide during social interaction; therefore, alterations in its expression or recognition are important limitations for communication. Our aim was to examine facial expression recognition abilities and their possible impairment in Parkinson's disease. First, we review the studies on this topic, which have not found entirely consistent results. Second, we analyze the factors that may explain these discrepancies and, in particular, as a third objective, we consider the relationship between emotion recognition problems and the cognitive impairment associated with the disease. Finally, we propose alternative strategies for the design of studies that could clarify the state of these abilities in Parkinson's disease. Most studies suggest deficits in facial expression recognition, especially for expressions with negative emotional content. However, it is possible that these alterations are related to those that also appear in the course of the disease in other perceptual and executive processes. To advance on this issue, we consider it necessary to design emotion recognition studies that differentially implicate executive or visuospatial processes, and/or that contrast cognitive abilities using facial expressions and non-emotional stimuli. Clarifying the status of these abilities, as well as increasing our knowledge of the functional consequences of the brain damage characteristic of the disease, may indicate whether special attention should be paid to their rehabilitation within intervention programs.

  18. Face recognition using facial expression: a novel approach

    NASA Astrophysics Data System (ADS)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly the most effective form of nonverbal communication. The face has always been equated with a person's identity; it draws the demarcation line between identity and extinction, and each line on the face adds an attribute to that identity. These lines become prominent when we experience an emotion, and they do not change completely with age. In this paper we propose a new technique for face recognition which focuses on the facial expressions of the subject to identify his or her face. This is a grey area on which not much light has been thrown earlier. According to earlier research it is difficult to alter one's natural expression, so our technique should be beneficial for identifying occluded or intentionally disguised faces. The results of the experiments conducted suggest that this technique can give a new direction to the field of face recognition, provide a strong base for the area, and serve as a core method for critical defense- and security-related applications.

  19. The look of fear and anger: facial maturity modulates recognition of fearful and angry expressions.

    PubMed

    Sacco, Donald F; Hugenberg, Kurt

    2009-02-01

    The current series of studies provides converging evidence that facial expressions of fear and anger may have co-evolved to mimic babyish and mature faces, respectively, in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes). In a speeded categorization task (Study 1) and a visual noise paradigm (Study 2), larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. The results are discussed as reflecting a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. (c) 2009 APA, all rights reserved

  20. Recognition of children on age-different images: Facial morphology and age-stable features.

    PubMed

    Caplova, Zuzana; Compassi, Valentina; Giancola, Silvio; Gibelli, Daniele M; Obertová, Zuzana; Poppa, Pasquale; Sala, Remo; Sforza, Chiarella; Cattaneo, Cristina

    2017-07-01

    The situation of missing children is one of the most emotional social issues worldwide. The search for and identification of missing children is often hampered, among other factors, by the fact that the facial morphology of long-term missing children changes as they grow. Nowadays, wide surveillance coverage potentially provides image material for comparisons with images of missing children that may facilitate identification. The aim of this study was to identify whether facial features are stable over time and can be utilized for facial recognition by comparing facial images of children at different ages, as well as to test the possible use of moles in recognition. The study was divided into two phases: (1) morphological classification of facial features using an Anthropological Atlas; and (2) development of an algorithm in MATLAB® R2014b for assessing the use of moles as age-stable features. The assessment of facial features by Anthropological Atlases showed high mismatch percentages among observers. On average, the mismatch percentages were lower for features describing shape than for those describing size. The nose tip cleft and the chin dimple showed the best agreement between observers regarding both categorization and stability over time. Using the position of moles as a reference point for recognition of the same person in age-different images appears to be a useful method in terms of objectivity, and it can be concluded that moles represent age-stable facial features that may be considered for preliminary recognition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
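The abstract does not detail the MATLAB® algorithm, but the idea of using mole positions as age-stable landmarks can be sketched as follows. This is a hypothetical illustration in Python, not the authors' implementation: mole coordinates are normalized by the interocular distance so they are comparable across photographs taken at different ages, and two images are scored by the fraction of moles that coincide within a tolerance. The function names and the tolerance value are assumptions.

```python
from math import hypot

def normalize_moles(moles, eye_left, eye_right):
    """Express mole positions in a face-centric frame: origin at the
    midpoint between the eyes, scaled by interocular distance, so the
    coordinates stay comparable as the face grows between photographs."""
    cx = (eye_left[0] + eye_right[0]) / 2
    cy = (eye_left[1] + eye_right[1]) / 2
    iod = hypot(eye_right[0] - eye_left[0], eye_right[1] - eye_left[1])
    return [((x - cx) / iod, (y - cy) / iod) for x, y in moles]

def mole_match_score(moles_a, moles_b, tol=0.08):
    """Fraction of moles in image A that have a counterpart in image B
    within `tol` (in interocular-distance units). `tol` is an assumed value."""
    if not moles_a:
        return 0.0
    hits = 0
    for xa, ya in moles_a:
        if any(hypot(xa - xb, ya - yb) <= tol for xb, yb in moles_b):
            hits += 1
    return hits / len(moles_a)
```

A high score on age-different images would support a preliminary recognition, in the spirit of the moles-as-reference-points approach described above.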

  1. Mapping structural covariance networks of facial emotion recognition in early psychosis: A pilot study.

    PubMed

    Buchy, Lisa; Barbato, Mariapaola; Makowski, Carolina; Bray, Signe; MacMaster, Frank P; Deighton, Stephanie; Addington, Jean

    2017-11-01

    People with psychosis show deficits recognizing facial emotions and disrupted activation in the underlying neural circuitry. We evaluated associations between facial emotion recognition and cortical thickness using a correlation-based approach to map structural covariance networks across the brain. Fifteen people with an early psychosis provided magnetic resonance scans and completed the Penn Emotion Recognition and Differentiation tasks. Fifteen historical controls provided magnetic resonance scans. Cortical thickness was computed using CIVET and analyzed with linear models. Seed-based structural covariance analysis was done using the Mapping Anatomical Correlations Across the Cerebral Cortex (MACACC) methodology. To map structural covariance networks involved in facial emotion recognition, the right somatosensory cortex and bilateral fusiform face areas were selected as seeds. Statistics were run in SurfStat. Findings showed greater cortical covariance between the right fusiform face region seed and the right orbitofrontal cortex in controls than in early psychosis subjects. Facial emotion recognition scores were not significantly associated with thickness in any region. A negative effect of Penn Differentiation scores on cortical covariance was seen between the left fusiform face area seed and the right superior parietal lobule in early psychosis subjects. Results suggest that facial emotion recognition ability is related to covariance in a temporal-parietal network in early psychosis. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Association of enhanced limbic response to threat with decreased cortical facial recognition memory response in schizophrenia.

    PubMed

    Satterthwaite, Theodore D; Wolf, Daniel H; Loughead, James; Ruparel, Kosha; Valdez, Jeffrey N; Siegel, Steven J; Kohler, Christian G; Gur, Raquel E; Gur, Ruben C

    2010-04-01

    Recognition memory of faces is impaired in patients with schizophrenia, as is the neural processing of threat-related signals, but how these deficits interact to produce symptoms is unclear. The authors used an affective face recognition paradigm to examine possible interactions between cognitive and affective neural systems in schizophrenia. Blood-oxygen-level-dependent response was examined by means of functional magnetic resonance imaging (3 Tesla) in healthy comparison subjects (N=21) and in patients with schizophrenia (N=12) or schizoaffective disorder, depressed type (N=4), during a two-choice recognition task that used images of human faces. Each target face, previously displayed with a threatening or nonthreatening affect, was displayed with neutral affect. Responses to successful recognition and responses to the effect of previously threatening versus nonthreatening affect were evaluated, and correlations with symptom severity (total Brief Psychiatric Rating Scale score) were examined. Functional connectivity analyses examined the relationship between activation in the amygdala and cortical regions involved in recognition memory. Patients performed the task more slowly than healthy comparison subjects. Comparison subjects recruited the expected cortical regions to a greater degree than patients, and patients with more severe symptoms demonstrated proportionally less recruitment. Increased symptoms were also correlated with augmented amygdala and orbitofrontal cortex response to threatening faces. Comparison subjects exhibited a negative correlation between activity in the amygdala and cortical regions involved in cognition, while patients showed weakening of this relationship. Increased symptoms were related to an enhanced threat response in limbic regions and a diminished recognition memory response in cortical regions, supporting a link between these two brain systems that are often examined in isolation. 

  3. [The effect of the serotonin transporter 5-HTTLPR polymorphism on the recognition of facial emotions in schizophrenia].

    PubMed

    Alfimova, M V; Golimbet, V E; Korovaitseva, G I; Lezheiko, T V; Abramova, L I; Aksenova, E V; Bolgov, M I

    2014-01-01

    The 5-HTTLPR (SLC6A4) and catechol-O-methyltransferase (COMT) Val158Met polymorphisms are reported to be associated with the processing of facial expressions in the general population. Impaired recognition of facial expressions, which is characteristic of schizophrenia, negatively impacts patients' social adaptation. To search for molecular mechanisms of this deficit, we studied the main and epistatic effects of the 5-HTTLPR and Val158Met polymorphisms on facial emotion recognition in patients with schizophrenia (n=299) and healthy controls (n=232). The 5-HTTLPR polymorphism was associated with emotion recognition in patients: ll-homozygotes recognized facial emotions significantly better than carriers of an s-allele (F=8.00; p=0.005). Although recognition of facial emotions was correlated with negative symptoms, verbal learning and trait anxiety, these variables did not significantly modify the association. In both groups, no effect of COMT genotype on the recognition of facial emotions was found.

  4. An Age-Related Dissociation of Short-Term Memory for Facial Identity and Facial Emotional Expression.

    PubMed

    Hartley, Alan A; Ravich, Zoe; Stringer, Sarah; Wiley, Katherine

    2015-09-01

    Memory for both facial emotional expression and facial identity was explored in younger and older adults in 3 experiments using a delayed match-to-sample procedure. Memory sets of 1, 2, or 3 faces were presented, which were followed by a probe after a 3-s retention interval. There was very little difference between younger and older adults in memory for emotional expressions, but memory for identity was substantially impaired in the older adults. Possible explanations for spared memory for emotional expressions include socioemotional selectivity theory as well as the existence of overlapping yet distinct brain networks for processing of different emotions. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Holistic face processing can inhibit recognition of forensic facial composites.

    PubMed

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. (c) 2016 APA, all rights reserved.

  6. The Change in Facial Emotion Recognition Ability in Inpatients with Treatment Resistant Schizophrenia After Electroconvulsive Therapy.

    PubMed

    Dalkıran, Mihriban; Tasdemir, Akif; Salihoglu, Tamer; Emul, Murat; Duran, Alaattin; Ugur, Mufit; Yavuz, Ruhi

    2017-09-01

    People with schizophrenia have impairments in emotion recognition along with other social cognitive deficits. In the current study, we aimed to investigate the immediate benefits of ECT on facial emotion recognition ability. Thirty-two treatment-resistant patients with schizophrenia for whom ECT was indicated enrolled in the study. Facial emotion stimuli were a set of 56 photographs depicting seven basic expressions: sadness, anger, happiness, disgust, surprise, fear, and neutral faces. The average age of the participants was 33.4 ± 10.5 years. The rate of recognizing the disgusted facial expression increased significantly after ECT (p < 0.05), and no significant changes were found for the remaining facial expressions (p > 0.05). After ECT, response times to the fearful and happy facial expressions were significantly shorter (p < 0.05). Facial emotion recognition is an important social cognitive skill for social harmony, proper relationships and independent living. At a minimum, ECT does not appear to affect facial emotion recognition ability negatively, and it seems to improve identification of the disgusted facial expression, which is related to dopamine-enriched regions of the brain.

  7. Automated Facial Recognition of Computed Tomography-Derived Facial Images: Patient Privacy Implications.

    PubMed

    Parks, Connie L; Monson, Keith L

    2017-04-01

    The recognizability of facial images extracted from publicly available medical scans raises patient privacy concerns. This study examined how accurately facial images extracted from computed tomography (CT) scans can be objectively matched with corresponding photographs of the scanned individuals. The test subjects were 128 adult Americans ranging in age from 18 to 60 years, representing both sexes and three self-identified population (ancestral descent) groups (African, European, and Hispanic). Using facial recognition software, the 2D images of the extracted facial models were compared for matches against five differently sized photo galleries. Depending on the scanning protocol and gallery size, in 6-61% of the cases a correct life photo match for a CT-derived facial image was the top-ranked image in the generated candidate list, even when blind searching in excess of 100,000 images. In 31-91% of the cases, a correct match was located within the top 50 images. Few significant differences (p > 0.05) in match rates were observed between the sexes or across the three age cohorts. Highly significant differences (p < 0.01) were, however, observed across the three ancestral cohorts and between the two CT scanning protocols. The results suggest that the probability of a match between a facial image extracted from a medical scan and a photograph of the individual is moderately high. The facial image data inherent in commonly employed medical imaging modalities may therefore need to be considered a potentially identifiable form of "comparable" facial imagery and protected as such under patient privacy legislation.

  8. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    PubMed

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth), and the effects of training have been shown to last for up to one month. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. After METT training, participants showed changes in foveal attention to features of facial expressions not used in METT training, generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of surprised, disgusted, fearful, happy, and neutral expressions, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia view novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces, indicating that more intensive instruction might be needed for these faces during training. Most importantly, the evidence suggests that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Does vigilance to pain make individuals experts in facial recognition of pain?

    PubMed Central

    Baum, Corinna; Kappesser, Judith; Schneider, Raphaela; Lautenbacher, Stefan

    2013-01-01

    BACKGROUND: It is well known that individual factors are important in the facial recognition of pain. However, it is unclear whether vigilance to pain as a pain-related attentional mechanism is among these relevant factors. OBJECTIVES: Vigilance to pain may have two different effects on the recognition of facial pain expressions: pain-vigilant individuals may detect pain faces better but overinclude other facial displays, misinterpreting them as expressing pain; or they may be true experts in discriminating between pain and other facial expressions. The present study aimed to test these two hypotheses. Furthermore, pain vigilance was assumed to be a distinct predictor, the impact of which on recognition cannot be completely replaced by related concepts such as pain catastrophizing and fear of pain. METHODS: Photographs of neutral, happy, angry and pain facial expressions were presented to 40 healthy participants, who were asked to classify them into the appropriate emotion categories and provide a confidence rating for each classification. Additionally, potential predictors of the discrimination performance for pain and anger faces – pain vigilance, pain-related catastrophizing, fear of pain – were assessed using self-report questionnaires. RESULTS: Pain-vigilant participants classified pain faces more accurately and did not misclassify anger as pain faces more frequently. However, vigilance to pain was not related to the confidence of recognition ratings. Pain catastrophizing and fear of pain did not account for the recognition performance. CONCLUSIONS: Moderate pain vigilance, as assessed in the present study, appears to be associated with appropriate detection of pain-related cues and not necessarily with the overinclusion of other negative cues. PMID:23717826

  10. Automated facial recognition of manually generated clay facial approximations: Potential application in unidentified persons data repositories.

    PubMed

    Parks, Connie L; Monson, Keith L

    2018-01-01

    This research examined how accurately 2D images (i.e., photographs) of 3D clay facial approximations were matched to corresponding photographs of the approximated individuals using an objective automated facial recognition system. Irrespective of search filter (i.e., blind, sex, or ancestry) or rank class (R1, R10, R25, and R50) employed, few operationally informative results were observed. In only a single instance of 48 potential match opportunities was a clay approximation matched to a corresponding life photograph within the top 50 images (R50) of a candidate list, even with relatively small gallery sizes created from the application of search filters (e.g., sex or ancestry search restrictions). Increasing the candidate lists to include the top 100 images (R100) resulted in only two additional instances of a correct match. Although other untested variables (e.g., approximation method, 2D photographic process, and practitioner skill level) may have impacted the observed results, this study suggests that 2D images of manually generated clay approximations are not readily matched to life photos by automated facial recognition systems. Further investigation is necessary in order to identify the underlying cause(s), if any, of the poor recognition results observed in this study (e.g., potentially inferior facial feature detection and extraction). Additional inquiry exploring prospective remedial measures (e.g., stronger feature differentiation) is also warranted, particularly given the prominent use of clay approximations in unidentified persons casework. Copyright © 2017. Published by Elsevier B.V.
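The rank classes (R1, R10, R25, R50) used here, and in the CT-scan study above, are cumulative match metrics: a probe counts as a hit at rank k if its true identity appears within the top k entries of the candidate list returned by the recognition system. A minimal sketch of how such rates are computed (the function and variable names are hypothetical, not the authors' software):

```python
def match_rates(candidate_lists, true_ids, ranks=(1, 10, 25, 50)):
    """Cumulative match rates: for each probe, check whether its true
    identity appears within the top-k entries of its candidate list,
    then report the hit fraction for each rank class k."""
    rates = {}
    n = len(candidate_lists)
    for k in ranks:
        hits = sum(1 for cands, truth in zip(candidate_lists, true_ids)
                   if truth in cands[:k])
        rates[k] = hits / n
    return rates
```

In this study's terms, the R50 result would correspond to the rate at k = 50 computed over the 48 probe approximations.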

  11. People with chronic facial pain perform worse than controls at a facial emotion recognition task, but it is not all about the emotion.

    PubMed

    von Piekartz, H; Wallwork, S B; Mohr, G; Butler, D S; Moseley, G L

    2015-04-01

    Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotional recognition tasks could reflect problems with emotional processing, motor processing or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on both tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks; in the first task, they identified the facial emotion presented in a photograph. In the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm to induce implicit motor imagery. People with chronic facial pain performed worse than controls at both tasks (Facially Expressed Emotion Labelling (FEEL) task P < 0.001; left/right judgment task P < 0.001). Participants who were more accurate at one task were also more accurate at the other, regardless of group (P < 0.001, r² = 0.523). Participants with chronic facial pain were worse than controls at both the FEEL emotion recognition task and the left/right facial expression task and performance covaried within participants. We propose that disrupted motor processing may underpin or at least contribute to the difficulty that facial pain patients have in emotion recognition and that further research that tests this proposal is warranted. © 2014 John Wiley & Sons Ltd.

  12. Facial Emotion Recognition Impairments are Associated with Brain Volume Abnormalities in Individuals with HIV

    PubMed Central

    Clark, Uraina S.; Walker, Keenan A.; Cohen, Ronald A.; Devlin, Kathryn N.; Folkers, Anna M.; Pina, Mathew M.; Tashima, Karen T.

    2015-01-01

    Impaired facial emotion recognition abilities in HIV+ patients are well documented, but little is known about the neural etiology of these difficulties. We examined the relation of facial emotion recognition abilities to regional brain volumes in 44 HIV-positive (HIV+) and 44 HIV-negative control (HC) adults. Volumes of structures implicated in HIV-associated neuropathology and emotion recognition were measured on MRI using an automated segmentation tool. Relative to HC, HIV+ patients demonstrated emotion recognition impairments for fearful expressions, reduced anterior cingulate cortex (ACC) volumes, and increased amygdala volumes. In the HIV+ group, fear recognition impairments correlated significantly with ACC, but not amygdala volumes. ACC reductions were also associated with lower nadir CD4 levels (i.e., greater HIV-disease severity). These findings extend our understanding of the neurobiological substrates underlying an essential social function, facial emotion recognition, in HIV+ individuals and implicate HIV-related ACC atrophy in the impairment of these abilities. PMID:25744868

  13. Effect of camera resolution and bandwidth on facial affect recognition.

    PubMed

    Cruz, Mario; Cruz, Robyn Flaum; Krupinski, Elizabeth A; Lopez, Ana Maria; McNeeley, Richard M; Weinstein, Ronald S

    2004-01-01

    This preliminary study explored the effect of camera resolution and bandwidth on facial affect recognition, an important process and clinical variable in mental health service delivery. Sixty medical students and mental health-care professionals were recruited and randomized to four different combinations of commonly used teleconferencing camera resolutions and bandwidths: (1) a one-chip charge-coupled device (CCD) camera, commonly used for VHS-grade taping and in teleconferencing systems costing less than $4,000, with a resolution of 280 lines, at a bandwidth of 128 kilobits per second (kbps); (2) VHS and 768 kbps; (3) a three-chip CCD camera, commonly used for Betacam (Beta) grade taping and in teleconferencing systems costing more than $4,000, with a resolution of 480 lines, at 128 kbps; and (4) Betacam and 768 kbps. The subjects were asked to identify four facial affects dynamically presented on videotape by an actor and an actress via a video monitor at 30 frames per second. Two-way analysis of variance (ANOVA) revealed a significant interaction effect for camera resolution and bandwidth (p = 0.02) and a significant main effect for camera resolution (p = 0.006), but no main effect for bandwidth was detected. Post hoc testing of interaction means, using the Tukey Honestly Significant Difference (HSD) test and the critical difference (CD) at the 0.05 alpha level (CD = 1.71), revealed that subjects in the VHS/768 kbps (M = 7.133) and VHS/128 kbps (M = 6.533) conditions were significantly better at recognizing the displayed facial affects than those in the Betacam/768 kbps (M = 4.733) or Betacam/128 kbps (M = 6.333) conditions. Camera resolution and bandwidth combinations differ in their capacity to influence facial affect recognition. For service providers, this study's results support the use of VHS cameras, at either 768 kbps or 128 kbps, over Betacam cameras for facial affect recognition.

  14. Emotional recognition from dynamic facial, vocal and musical expressions following traumatic brain injury.

    PubMed

    Drapeau, Joanie; Gosselin, Nathalie; Peretz, Isabelle; McKerral, Michelle

    2017-01-01

    The aim was to assess emotion recognition from dynamic facial, vocal and musical expressions in sub-groups of adults with traumatic brain injuries (TBI) of different severities and to identify possible common underlying mechanisms across domains. Forty-one adults participated in this study: 10 with moderate-severe TBI, nine with complicated mild TBI, 11 with uncomplicated mild TBI and 11 healthy controls, who were administered experimental (emotional recognition, valence-arousal) and control tasks (emotional and structural discrimination) for each domain. Recognition of fearful faces was significantly impaired in the moderate-severe and complicated mild TBI sub-groups, as compared to those with uncomplicated mild TBI and controls. Effect sizes were medium-large. Participants with lower Glasgow Coma Scale (GCS) scores performed more poorly when recognizing fearful dynamic facial expressions. Emotion recognition from the auditory domains was preserved following TBI, irrespective of severity. All groups performed equally on control tasks, indicating no perceptual disorders. Although emotional recognition from vocal and musical expressions was preserved, no correlation was found across auditory domains. This preliminary study may contribute to improving comprehension of emotional recognition following TBI. Future studies of larger samples could usefully include measures of the functional impacts of recognition deficits for fearful facial expressions; these could help refine interventions for emotional recognition following a brain injury.

  15. Multimedia Content Development as a Facial Expression Datasets for Recognition of Human Emotions

    NASA Astrophysics Data System (ADS)

    Mamonto, N. E.; Maulana, H.; Liliana, D. Y.; Basaruddin, T.

    2018-02-01

    Previously developed datasets contain facial expressions from foreign subjects. The development of this multimedia content aims to address the problems experienced by the research team and by other researchers who will conduct similar research. The method used in the development of multimedia content as a facial expression dataset for human emotion recognition is the Villamil-Molina version of the multimedia development method. The multimedia content was developed with 10 subjects (talents); each talent performed 3 shots, demonstrating 19 facial expressions in each. After the editing and rendering process, testing was carried out, with the conclusion that the multimedia content can be used as a facial expression dataset for the recognition of human emotions.

  16. Neurocognition and symptoms identify links between facial recognition and emotion processing in schizophrenia: Meta-analytic findings

    PubMed Central

    Ventura, Joseph; Wood, Rachel C.; Jimenez, Amy M.; Hellemann, Gerhard S.

    2014-01-01

    Background In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? Methods A meta-analysis of 102 studies (combined n = 4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Results Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r = .51). In addition, the relationship between FR and EP through voice prosody (r = .58) is as strong as the relationship between FR and EP based on facial stimuli (r = .53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality – facial stimuli and voice prosody. Discussion The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. PMID:24268469

  18. Facial emotion recognition in Williams syndrome and Down syndrome: A matching and developmental study.

    PubMed

    Martínez-Castilla, Pastora; Burt, Michael; Borgatti, Renato; Gagliardi, Chiara

    2015-01-01

    In this study both the matching and developmental trajectories approaches were used to clarify questions that remain open in the literature on facial emotion recognition in Williams syndrome (WS) and Down syndrome (DS). The matching approach showed that individuals with WS or DS exhibit neither proficiency for the expression of happiness nor specific impairments for negative emotions. Instead, they present the same pattern of emotion recognition as typically developing (TD) individuals. Thus, the better performance on the recognition of positive compared to negative emotions usually reported in WS and DS is not specific of these populations but seems to represent a typical pattern. Prior studies based on the matching approach suggested that the development of facial emotion recognition is delayed in WS and atypical in DS. Nevertheless, and even though performance levels were lower in DS than in WS, the developmental trajectories approach used in this study evidenced that not only individuals with DS but also those with WS present atypical development in facial emotion recognition. Unlike in the TD participants, where developmental changes were observed along with age, in the WS and DS groups, the development of facial emotion recognition was static. Both individuals with WS and those with DS reached an early maximum developmental level due to cognitive constraints.

  19. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. A biometric system based on facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features for the expression parameters happy, sad, neutral, angry, fearful, and disgusted. A Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is then used for the classification of facial expressions. Evaluated on our 185 expression images of 10 persons, the MELS-SVM model showed a high accuracy of 99.998% using the RBF kernel.
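The pipeline described above, PCA feature extraction followed by classification, can be sketched in NumPy. This is an illustrative stand-in, not the paper's code: the data are synthetic, and a 1-nearest-neighbour rule in the reduced space replaces the MELS-SVM classifier, which is not a standard library component.

```python
import numpy as np

# Sketch of the PCA ("eigenface") feature-extraction stage on synthetic
# stand-in data: 185 flattened 64x64 "face images" with 6 expression labels.
rng = np.random.default_rng(0)
n_images, n_pixels, n_components = 185, 4096, 40
X = rng.normal(size=(n_images, n_pixels))
y = rng.integers(0, 6, size=n_images)

# PCA via SVD of the mean-centred data matrix.
mean = X.mean(axis=0)
Xc = X - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:n_components]      # principal axes (40 x 4096)
features = Xc @ components.T        # each image -> 40-dim feature vector

# Classify a probe image by nearest neighbour in PCA space
# (a simple stand-in for the MELS-SVM classifier).
probe = (X[0] - mean) @ components.T
dists = np.linalg.norm(features - probe, axis=1)
print(y[int(np.argmin(dists))] == y[0])  # prints True: probe matches itself
```

In a real system the probe would be a held-out image, and the 40-dimensional feature vectors would feed the SVM ensemble rather than a distance rule.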

  20. EMOTION RECOGNITION OF VIRTUAL AGENTS FACIAL EXPRESSIONS: THE EFFECTS OF AGE AND EMOTION INTENSITY

    PubMed Central

    Beer, Jenay M.; Fisk, Arthur D.; Rogers, Wendy A.

    2014-01-01

    People make determinations about the social characteristics of an agent (e.g., robot or virtual agent) by interpreting social cues displayed by the agent, such as facial expressions. Although a considerable amount of research has been conducted investigating age-related differences in emotion recognition of human faces (e.g., Sullivan, & Ruffman, 2004), the effect of age on emotion identification of virtual agent facial expressions has been largely unexplored. Age-related differences in emotion recognition of facial expressions are an important factor to consider in the design of agents that may assist older adults in a recreational or healthcare setting. The purpose of the current research was to investigate whether age-related differences in facial emotion recognition can extend to emotion-expressive virtual agents. Younger and older adults performed a recognition task with a virtual agent expressing six basic emotions. Larger age-related differences were expected for virtual agents displaying negative emotions, such as anger, sadness, and fear. In fact, the results indicated that older adults showed a decrease in emotion recognition accuracy for a virtual agent's emotions of anger, fear, and happiness. PMID:25552896

  1. Proposal of Self-Learning and Recognition System of Facial Expression

    NASA Astrophysics Data System (ADS)

    Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko

    We describe the realization of a more complex function using information acquired from several rudimentary built-in functions. We propose a self-learning and recognition system for human facial expressions, achieved through natural interaction between human and robot. A robot equipped with this system can understand human facial expressions and behave according to them after completing the learning process. The system is modelled after the process by which a baby learns its parents' facial expressions. With a camera, the system can capture face images, and with CdS sensors on the robot's head, the robot can obtain information about human actions. Using the information from these sensors, the robot extracts features of each facial expression. After self-learning is complete, when a person changes his or her facial expression in front of the robot, the robot performs the action associated with that facial expression.

  2. Computer-Aided Recognition of Facial Attributes for Fetal Alcohol Spectrum Disorders.

    PubMed

    Valentine, Matthew; Bihm, Dustin C J; Wolf, Lior; Hoyme, H Eugene; May, Philip A; Buckley, David; Kalberg, Wendy; Abdul-Rahman, Omar A

    2017-12-01

    To compare the detection of facial attributes by computer-based facial recognition software of 2-D images against standard, manual examination in fetal alcohol spectrum disorders (FASD). Participants were gathered from the Fetal Alcohol Syndrome Epidemiology Research database. Standard frontal and oblique photographs of children were obtained during a manual, in-person dysmorphology assessment. Images were submitted for facial analysis conducted by the facial dysmorphology novel analysis technology (an automated system), which assesses ratios of measurements between various facial landmarks to determine the presence of dysmorphic features. Manual blinded dysmorphology assessments were compared with those obtained via the computer-aided system. Areas under the curve values for individual receiver-operating characteristic curves revealed the computer-aided system (0.88 ± 0.02) to be comparable to the manual method (0.86 ± 0.03) in detecting patients with FASD. Interestingly, cases of alcohol-related neurodevelopmental disorder (ARND) were identified more efficiently by the computer-aided system (0.84 ± 0.07) in comparison to the manual method (0.74 ± 0.04). A facial gestalt analysis of patients with ARND also identified more generalized facial findings compared to the cardinal facial features seen in more severe forms of FASD. We found there was an increased diagnostic accuracy for ARND via our computer-aided method. As this category has been historically difficult to diagnose, we believe our experiment demonstrates that facial dysmorphology novel analysis technology can potentially improve ARND diagnosis by introducing a standardized metric for recognizing FASD-associated facial anomalies. Earlier recognition of these patients will lead to earlier intervention with improved patient outcomes. Copyright © 2017 by the American Academy of Pediatrics.

  3. Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?

    PubMed

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James

    2017-01-01

    Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants in order to explore the relationship between these two abilities and any differences between the two groups of participants. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability of recognizing emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer-screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. For emotion recognition, PD reported lower score than HC for Ekman total score (p<0.001), and for single emotions sub-scores happiness, fear, anger, sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness, anger (all p<0.001). RT and the level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between the emotion facial recognition and expressivity in both groups; the

  4. Lying about facial recognition: an fMRI study.

    PubMed

    Bhatt, S; Mbwana, J; Adeyemo, A; Sawyer, A; Hailu, A; Vanmeter, J

    2009-03-01

    Novel deception detection techniques have been in creation for centuries. Functional magnetic resonance imaging (fMRI) is a neuroscience technology that non-invasively measures brain activity associated with behavior and cognition. A number of investigators have explored the utilization and efficiency of fMRI in deception detection. In this study, 18 subjects were instructed during an fMRI "line-up" task to either conceal (lie) or reveal (truth) the identities of individuals seen in study sets in order to determine the neural correlates of intentionally misidentifying previously known faces (lying about recognition). A repeated measures ANOVA (lie vs. truth and familiar vs. unfamiliar) and two paired t-tests (familiar vs. unfamiliar and familiar lie vs. familiar truth) revealed areas of activation associated with deception in the right MGF, red nucleus, IFG, SMG, SFG (with ACC), DLPFC, and bilateral precuneus. The areas activated in the present study may be involved in the suppression of truth, working and visuospatial memories, and imagery when providing misleading (deceptive) responses to facial identification prompts in the form of a "line-up".

  5. Robust representation and recognition of facial emotions using extreme sparse learning.

    PubMed

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

    Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.

  6. CACNA1C risk variant affects facial emotion recognition in healthy individuals.

    PubMed

    Nieratschker, Vanessa; Brückmann, Christof; Plewnia, Christian

    2015-11-27

    Recognition and correct interpretation of facial emotion is essential for social interaction and communication. Previous studies have shown that impairments in this cognitive domain are common features of several psychiatric disorders. Recent association studies identified CACNA1C as one of the most promising genetic risk factors for psychiatric disorders, and previous evidence suggests that the most replicated risk variant in CACNA1C (rs1006737) affects emotion recognition and processing. However, studies investigating the influence of rs1006737 on this intermediate phenotype in healthy subjects at the behavioral level are largely missing to date. Here, we applied the "Reading the Mind in the Eyes" test, a facial emotion recognition paradigm, in a cohort of 92 healthy individuals to address this question. Whereas accuracy was not affected by genotype, CACNA1C rs1006737 risk-allele carriers (AA/AG) showed significantly slower mean response times compared to individuals homozygous for the G-allele, indicating that healthy risk-allele carriers require more information to correctly identify a facial emotion. Our study is the first to provide evidence for an impairing behavioral effect of the CACNA1C risk variant rs1006737 on facial emotion recognition in healthy individuals and adds to the growing number of studies pointing towards CACNA1C as affecting intermediate phenotypes of psychiatric disorders.

  7. Accurate forced-choice recognition without awareness of memory retrieval.

    PubMed

    Voss, Joel L; Baym, Carol L; Paller, Ken A

    2008-06-01

    Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit memory. When memory for kaleidoscopes was tested using a two-alternative forced-choice recognition test with similar foils, recognition was enhanced by an attentional manipulation at encoding known to degrade explicit memory. Moreover, explicit recognition was most accurate when the awareness of retrieval was absent. These dissociations between accuracy and phenomenological features of explicit memory are consistent with the notion that correct responding resulted from experience-dependent enhancements of perceptual fluency with specific stimuli--the putative mechanism for perceptual priming effects in implicit memory tests. This mechanism may contribute to recognition performance in a variety of frequently-employed testing circumstances. Our results thus argue for a novel view of recognition, in that analyses of its neurocognitive foundations must take into account the potential for both (1) recognition mechanisms allied with implicit memory and (2) recognition mechanisms allied with explicit memory.

  8. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors

    PubMed Central

    Ardizzi, Martina; Evangelista, Valentina; Ferroni, Francesca; Umiltà, Maria A.; Ravera, Roberto; Gallese, Vittorio

    2017-01-01

    One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, normally allowing the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus unbalancing victims’ recognition of negative emotions. Despite the plethora of studies on this topic, to date, it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30, 15 males) performed a forced-choice chimeric facial expressions recognition task. The chimeric facial expressions were obtained pairing upper and lower half faces of two different negative emotions (selected from anger, fear and sadness for a total of six different combinations). Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger. In this case, differently from controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims’ performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify the perceptual

  9. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors.

    PubMed

    Ardizzi, Martina; Evangelista, Valentina; Ferroni, Francesca; Umiltà, Maria A; Ravera, Roberto; Gallese, Vittorio

    2017-01-01

    One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, normally allowing the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus unbalancing victims' recognition of negative emotions. Despite the plethora of studies on this topic, to date, it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30, 15 males) performed a forced-choice chimeric facial expressions recognition task. The chimeric facial expressions were obtained pairing upper and lower half faces of two different negative emotions (selected from anger, fear and sadness for a total of six different combinations). Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger. In this case, differently from controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims' performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify the perceptual

  10. Facial emotion recognition deficits in relatives of children with autism are not associated with 5HTTLPR.

    PubMed

    Neves, Maila de Castro Lourenço das; Tremeau, Fabien; Nicolato, Rodrigo; Lauar, Hélio; Romano-Silva, Marco Aurélio; Correa, Humberto

    2011-09-01

    A large body of evidence suggests that several aspects of face processing are impaired in autism and that this impairment might be hereditary. This study was aimed at assessing facial emotion recognition in parents of children with autism and its associations with a functional polymorphism of the serotonin transporter (5HTTLPR). We evaluated 40 parents of children with autism and 41 healthy controls. All participants were administered the Penn Emotion Recognition Test (ER40) and were genotyped for 5HTTLPR. Our study showed that parents of children with autism performed worse in the facial emotion recognition test than controls. Analyses of error patterns showed that parents of children with autism over-attributed neutral to emotional faces. We found evidence that 5HTTLPR polymorphism did not influence the performance in the Penn Emotion Recognition Test, but that it may determine different error patterns. Facial emotion recognition deficits are more common in first-degree relatives of autistic patients than in the general population, suggesting that facial emotion recognition is a candidate endophenotype for autism.

  11. Modulation of α power and functional connectivity during facial affect recognition.

    PubMed

    Popov, Tzvetan; Miller, Gregory A; Rockstroh, Brigitte; Weisz, Nathan

    2013-04-03

    Research has linked oscillatory activity in the α frequency range, particularly in sensorimotor cortex, to processing of social actions. Results further suggest involvement of sensorimotor α in the processing of facial expressions, including affect. The sensorimotor face area may be critical for perception of emotional face expression, but the role it plays is unclear. The present study sought to clarify how oscillatory brain activity contributes to or reflects processing of facial affect during changes in facial expression. Neuromagnetic oscillatory brain activity was monitored while 30 volunteers viewed videos of human faces that changed their expression from neutral to fearful, neutral, or happy expressions. Induced changes in α power during the different morphs, source analysis, and graph-theoretic metrics served to identify the role of α power modulation and cross-regional coupling by means of phase synchrony during facial affect recognition. Changes from neutral to emotional faces were associated with a 10-15 Hz power increase localized in bilateral sensorimotor areas, together with occipital power decrease, preceding reported emotional expression recognition. Graph-theoretic analysis revealed that, in the course of a trial, the balance between sensorimotor power increase and decrease was associated with decreased and increased transregional connectedness as measured by node degree. Results suggest that modulations in α power facilitate early registration, with sensorimotor cortex including the sensorimotor face area largely functionally decoupled and thereby protected from additional, disruptive input and that subsequent α power decrease together with increased connectedness of sensorimotor areas facilitates successful facial affect recognition.
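The graph-theoretic metric named above, node degree over a phase-synchrony network, reduces to a simple matrix operation: binarise the connectivity matrix at a threshold and count each node's connections. The sketch below uses a synthetic symmetric matrix and an arbitrary threshold purely for illustration; neither reflects the study's actual sensor data or parameters.

```python
import numpy as np

# Synthetic stand-in for a phase-locking-value (PLV) connectivity
# matrix between 6 "sensors"; real MEG data would supply these values.
rng = np.random.default_rng(2)
n = 6
plv = rng.random((n, n))
plv = (plv + plv.T) / 2      # connectivity matrices are symmetric
np.fill_diagonal(plv, 0)     # no self-connections

# Binarise at an arbitrary illustrative threshold, then node degree
# is the number of supra-threshold connections per sensor.
adjacency = plv > 0.5
degree = adjacency.sum(axis=1)
print(degree.shape)
```

Higher degree for a sensor indicates greater transregional connectedness, the quantity the study relates to sensorimotor α power changes.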

  12. Deficits in Facial Emotion Recognition in Schizophrenia: A Replication Study with Korean Subjects

    PubMed Central

    Lee, Seung Jae; Lee, Hae-Kook; Kweon, Yong-Sil; Lee, Chung Tai

    2010-01-01

    Objective We investigated the deficit in the recognition of facial emotions in a sample of medicated, stable Korean patients with schizophrenia using Korean facial emotion pictures and examined whether the possible impairments would corroborate previous findings. Methods Fifty-five patients with schizophrenia and 62 healthy control subjects completed the Facial Affect Identification Test with a new set of 44 colored photographs of Korean faces including the six universal emotions as well as neutral faces. Results Korean patients with schizophrenia showed impairments in the recognition of sad, fearful, and angry faces [F(1,114)=6.26, p=0.014; F(1,114)=6.18, p=0.014; F(1,114)=9.28, p=0.003, respectively], but their accuracy was no different from that of controls in the recognition of happy emotions. Higher total and three subscale scores of the Positive and Negative Syndrome Scale (PANSS) correlated with worse performance on both angry and neutral faces. Correct responses on happy stimuli were negatively correlated with negative symptom scores of the PANSS. Patients with schizophrenia also exhibited different patterns of misidentification relative to normal controls. Conclusion These findings were consistent with previous studies carried out with different ethnic groups, suggesting cross-cultural similarities in facial recognition impairment in schizophrenia. PMID:21253414

  13. Facial expression recognition based on weber local descriptor and sparse representation

    NASA Astrophysics Data System (ADS)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in the area of computer vision for nearly ten years. During that decade, many state-of-the-art methods have been proposed that achieve very high accuracy on face images free of interference. Nowadays, many researchers have begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method includes three parts: first, the face images are divided into many local patches; then the WLD histograms of each patch are extracted; finally, all the WLD histogram features are concatenated into a vector and combined with SRC to classify the facial expressions. Experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
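The patch-wise WLD feature extraction described above can be sketched as follows. This is an illustrative approximation, not the paper's implementation: it computes only the Weber differential-excitation component (omitting orientation), uses invented patch and bin sizes, and leaves out the SRC classification stage entirely.

```python
import numpy as np

def differential_excitation(img, eps=1e-6):
    """Weber differential excitation: arctan of the summed relative
    intensity difference between each pixel and its 4 neighbours."""
    c = img[1:-1, 1:-1]
    diff = (img[:-2, 1:-1] + img[2:, 1:-1] +
            img[1:-1, :-2] + img[1:-1, 2:] - 4 * c)
    return np.arctan(diff / (c + eps))

def patch_histograms(img, patch=8, bins=8):
    """Concatenate per-patch excitation histograms into one feature vector
    (patch size and bin count here are illustrative choices)."""
    xi = differential_excitation(img)
    feats = []
    for i in range(0, xi.shape[0] - patch + 1, patch):
        for j in range(0, xi.shape[1] - patch + 1, patch):
            h, _ = np.histogram(xi[i:i + patch, j:j + patch],
                                bins=bins, range=(-np.pi / 2, np.pi / 2))
            feats.append(h / max(h.sum(), 1))  # normalise each histogram
    return np.concatenate(feats)

rng = np.random.default_rng(1)
img = rng.random((32, 32))          # stand-in for a grayscale face image
f = patch_histograms(img)
print(f.shape)
```

In the full method, such per-image feature vectors would form the dictionary columns for sparse-representation classification, with a test image assigned to the class whose atoms best reconstruct it.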

  14. Facial recognition software success rates for the identification of 3D surface reconstructed facial images: implications for patient privacy and security.

    PubMed

    Mazura, Jan C; Juluru, Krishna; Chen, Joseph J; Morgan, Tara A; John, Majnu; Siegel, Eliot L

    2012-06-01

    Image de-identification has focused on the removal of textual protected health information (PHI). Surface reconstructions of the face have the potential to reveal a subject's identity even when textual PHI is absent. This study assessed the ability of a computer application to match research subjects' 3D facial reconstructions with conventional photographs of their face. In a prospective study, 29 subjects underwent CT scans of the head and had frontal digital photographs of their face taken. Facial reconstructions of each CT dataset were generated on a 3D workstation. In phase 1, photographs of the 29 subjects undergoing CT scans were added to a digital directory and tested for recognition using facial recognition software. In phases 2-4, additional photographs were added in groups of 50 to increase the pool of possible matches and the test for recognition was repeated. As an internal control, photographs of all subjects were tested for recognition against an identical photograph. Of 3D reconstructions, 27.5% were matched correctly to corresponding photographs (95% upper CL, 40.1%). All study subject photographs were matched correctly to identical photographs (95% lower CL, 88.6%). Of 3D reconstructions, 96.6% were recognized simply as a face by the software (95% lower CL, 83.5%). Facial recognition software has the potential to recognize features on 3D CT surface reconstructions and match these with photographs, with implications for PHI.

  15. Individual differences in the recognition of facial expressions: an event-related potentials study.

    PubMed

    Tamamiya, Yoshiyuki; Hiraki, Kazuo

    2013-01-01

    Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of 3 facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, with ERP components as predictor variables, assessed hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies were predictive of accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time. The P2 latencies significantly predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.

  16. The familial basis of facial emotion recognition deficits in adolescents with conduct disorder and their unaffected relatives.

    PubMed

    Sully, K; Sonuga-Barke, E J S; Fairchild, G

    2015-07-01

    There is accumulating evidence of impairments in facial emotion recognition in adolescents with conduct disorder (CD). However, the majority of studies in this area have only been able to demonstrate an association, rather than a causal link, between emotion recognition deficits and CD. To move closer towards understanding the causal pathways linking emotion recognition problems with CD, we studied emotion recognition in the unaffected first-degree relatives of CD probands, as well as those with a diagnosis of CD. Using a family-based design, we investigated facial emotion recognition in probands with CD (n = 43), their unaffected relatives (n = 21), and healthy controls (n = 38). We used the Emotion Hexagon task, an alternative forced-choice task using morphed facial expressions depicting the six primary emotions, to assess facial emotion recognition accuracy. Relative to controls, the CD group showed impaired recognition of anger, fear, happiness, sadness and surprise (all p < 0.005). Similar to probands with CD, unaffected relatives showed deficits in anger and happiness recognition relative to controls (all p < 0.008), with a trend toward a deficit in fear recognition. There were no significant differences in performance between the CD probands and the unaffected relatives following correction for multiple comparisons. These results suggest that facial emotion recognition deficits are present in adolescents who are at increased familial risk for developing antisocial behaviour, as well as those who have already developed CD. Consequently, impaired emotion recognition appears to be a viable familial risk marker or candidate endophenotype for CD.

  17. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    PubMed

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Violent media consumption and the recognition of dynamic facial expressions.

    PubMed

    Kirsh, Steven J; Mounts, Jeffrey R W; Olczak, Paul V

    2006-05-01

This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions were morphed into either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph. Results indicated that, independent of trait aggressiveness, participants high in violent media consumption responded more slowly to depictions of happiness and more quickly to depictions of anger than participants low in violent media consumption. Implications of these findings are discussed with respect to current models of aggressive behavior.

  19. Role of temporal processing stages by inferior temporal neurons in facial recognition.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.

  1. Does Facial Resemblance Enhance Cooperation?

    PubMed Central

    Giang, Trang; Bell, Raoul; Buchner, Axel

    2012-01-01

Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship between the participants and the self-resemblant composite faces was higher than that between actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effects on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, or on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin recognition module built into the human face recognition system. PMID:23094095

  2. An Electrophysiological Signature of Unconscious Recognition Memory

    PubMed Central

    Voss, Joel L.; Paller, Ken A.

    2009-01-01

    Contradicting the common assumption that accurate recognition reflects explicit-memory processing, we describe evidence for recognition lacking two hallmark explicit-memory features: awareness of memory retrieval and facilitation by attentive encoding. Kaleidoscope images were encoded in conjunction with an attentional diversion and subsequently recognized more accurately than those encoded without diversion. Confidence in recognition was superior following attentive encoding, though recognition was remarkably accurate when people claimed to be unaware of memory retrieval. This “implicit recognition” was associated with frontal-occipital negative brain potentials at 200-400 ms post-stimulus-onset, which were spatially and temporally distinct from positive brain potentials corresponding to explicit recollection and familiarity. This dissociation between behavioral and electrophysiological characteristics of “implicit recognition” versus explicit recognition indicates that a neurocognitive mechanism with properties similar to those that produce implicit memory can be operative in standard recognition tests. People can accurately discriminate repeat stimuli from new stimuli without necessarily knowing it. PMID:19198606

  3. Investigation of facial emotion recognition, alexithymia, and levels of anxiety and depression in patients with somatic symptoms and related disorders

    PubMed Central

    Öztürk, Ahmet; Kiliç, Alperen; Deveci, Erdem; Kirpinar, İsmet

    2016-01-01

Background The concept of facial emotion recognition is well established in various neuropsychiatric disorders. Although emotional disturbances are strongly associated with somatoform disorders, only a limited number of studies have investigated facial emotion recognition in somatoform disorders. Furthermore, no studies have addressed this issue using the new diagnostic classification of somatoform disorders as somatic symptom and related disorders (SSD). In this study, we aimed to compare facial emotion recognition between patients with SSD and age- and sex-matched healthy controls (HC), and to re-examine facial emotion recognition using the new criteria for SSD. Patients and methods After applying the inclusion and exclusion criteria, 54 patients diagnosed with SSD according to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria and 46 age- and sex-matched HC were selected to participate in the present study. Facial emotion recognition, alexithymia, and anxiety and depression status were compared between the groups. Results Patients with SSD had significantly lower facial emotion recognition scores for fearful, disgusted, and neutral faces compared with age- and sex-matched HC (t=−2.88, P=0.005; t=−2.86, P=0.005; and t=−2.56, P=0.009, respectively). After eliminating the effects of alexithymia and depressive and anxious states, the groups were found to be similar in their responses to facial emotions and mean reaction times to facial emotions. Discussion Although only a limited number of studies have examined recognition of facial emotion in patients with somatoform disorders, our study is the first to investigate facial recognition in patients with SSD diagnosed according to the DSM-5 criteria. Recognition of facial emotion was found to be disturbed in patients with SSD. However, our findings suggest that

  4. Bidirectional Modulation of Recognition Memory

    PubMed Central

    Ho, Jonathan W.; Poeta, Devon L.; Jacobson, Tara K.; Zolnik, Timothy A.; Neske, Garrett T.; Connors, Barry W.

    2015-01-01

    Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects. For example, animals and humans with perirhinal damage are unable to distinguish familiar from novel objects in recognition memory tasks. In the normal brain, perirhinal neurons respond to novelty and familiarity by increasing or decreasing firing rates. Recent work also implicates oscillatory activity in the low-beta and low-gamma frequency bands in sensory detection, perception, and recognition. Using optogenetic methods in a spontaneous object exploration (SOR) task, we altered recognition memory performance in rats. In the SOR task, normal rats preferentially explore novel images over familiar ones. We modulated exploratory behavior in this task by optically stimulating channelrhodopsin-expressing perirhinal neurons at various frequencies while rats looked at novel or familiar 2D images. Stimulation at 30–40 Hz during looking caused rats to treat a familiar image as if it were novel by increasing time looking at the image. Stimulation at 30–40 Hz was not effective in increasing exploration of novel images. Stimulation at 10–15 Hz caused animals to treat a novel image as familiar by decreasing time looking at the image, but did not affect looking times for images that were already familiar. We conclude that optical stimulation of PER at different frequencies can alter visual recognition memory bidirectionally. SIGNIFICANCE STATEMENT Recognition of novelty and familiarity are important for learning, memory, and decision making. Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects, but how novelty and familiarity are encoded and transmitted in the brain is not known. Perirhinal neurons respond to novelty and familiarity by changing firing rates, but recent work suggests that brain oscillations may also be important for recognition. In this study, we showed that

  5. Facial Emotion Recognition and Expression in Parkinson’s Disease: An Emotional Mirror Mechanism?

    PubMed Central

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J.; Kilner, James

    2017-01-01

    Background and aim Parkinson’s disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants in order to explore the relationship between these two abilities and any differences between the two groups of participants. Methods Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability of recognizing emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer-screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. Results For emotion recognition, PD reported lower score than HC for Ekman total score (p<0.001), and for single emotions sub-scores happiness, fear, anger, sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness, anger (all p<0.001). RT and the level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between the emotion facial recognition and

  6. Age Deficits in Facial Affect Recognition: The Influence of Dynamic Cues.

    PubMed

    Grainger, Sarah A; Henry, Julie D; Phillips, Louise H; Vanman, Eric J; Allen, Roy

    2017-07-01

Older adults have difficulties in identifying most facial expressions of emotion. However, most aging studies have presented static photographs of intense expressions, whereas in everyday experience people see emotions that develop and change. The present study was designed to assess whether age-related difficulties with emotion recognition are reduced when more ecologically valid (i.e., dynamic) stimuli are used. We examined the effect of stimulus format (i.e., static vs. dynamic) on facial affect recognition in two separate studies that included independent samples and distinct stimulus sets. In addition to younger and older participants, a middle-aged group was included in Study 1, and eye gaze patterns were assessed in Study 2. Across both studies, older adults performed worse than younger adults on measures of facial affect recognition. In Study 1, older and middle-aged adults benefited from dynamic stimuli, but only when the emotional displays were subtle. Younger adults gazed more at the eye region of the face relative to older adults (Study 2), but dynamic presentation increased attention towards the eye region for younger adults only. Together, these studies provide important and novel insights into the specific circumstances in which older adults may be expected to experience difficulties in perceiving facial emotions. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. In-the-wild facial expression recognition in extreme poses

    NASA Astrophysics Data System (ADS)

    Yang, Fei; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

Facial expression recognition is an active research problem in computer vision. In recent years, research has moved from the lab environment to in-the-wild settings, which are challenging, especially under extreme head poses. Current expression recognition systems typically try to factor out pose effects in pursuit of general applicability. In this work, we take the opposite approach: we explicitly consider head pose and recognize expressions within specific head poses. Our pipeline has two stages: first, detect the head pose and assign it to one of several pre-defined pose classes; second, recognize the facial expression within each pose class. Our experiments show that recognition with pose-class grouping is much better than direct recognition that ignores pose. We combine hand-crafted features (SIFT, LBP, and geometric features) with deep learning features to represent the expressions; the hand-crafted features are fed into the deep learning framework alongside the high-level deep features. For comparison, we implement both SVM and random forest prediction models. To train and test our method, we labeled a face dataset with the 6 basic expressions.
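
    The two-stage idea described above (assign a pose class, then classify the expression within that class) can be sketched as follows. This is a minimal illustration with a nearest-centroid classifier and hypothetical yaw boundaries; the paper's actual system fuses SIFT/LBP/geometric and deep features and uses SVM or random-forest classifiers.

    ```python
    import numpy as np

    # Hypothetical yaw boundaries (degrees) defining 3 coarse pose classes.
    POSE_BINS = [-90, -30, 30, 90]

    def pose_class(yaw_deg):
        """Assign a head yaw angle to one of the pre-defined pose classes."""
        return int(np.digitize(yaw_deg, POSE_BINS[1:-1]))  # 0=left, 1=frontal, 2=right

    class PoseGroupedClassifier:
        """Train one nearest-centroid expression classifier per pose class."""

        def fit(self, feats, yaws, labels):
            groups = {}  # (pose class, expression label) -> list of feature vectors
            for f, y, lab in zip(feats, yaws, labels):
                groups.setdefault((pose_class(y), lab), []).append(f)
            # Collapse each group to its mean feature vector (the centroid).
            self.centroids = {k: np.mean(v, axis=0) for k, v in groups.items()}
            return self

        def predict(self, feat, yaw):
            # Only compare against centroids trained for this sample's pose class.
            p = pose_class(yaw)
            cands = {lab: c for (pc, lab), c in self.centroids.items() if pc == p}
            return min(cands, key=lambda lab: np.linalg.norm(feat - cands[lab]))
    ```

    The design choice being illustrated is that pose grouping turns one hard, pose-entangled classification problem into several easier, pose-specific ones, at the cost of needing training data in every pose class.
    
    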

  8. The relationship between facial emotion recognition and executive functions in first-episode patients with schizophrenia and their siblings.

    PubMed

    Yang, Chengqing; Zhang, Tianhong; Li, Zezhi; Heeramun-Aubeeluck, Anisha; Liu, Na; Huang, Nan; Zhang, Jie; He, Leiying; Li, Hui; Tang, Yingying; Chen, Fazhan; Liu, Fei; Wang, Jijun; Lu, Zheng

    2015-10-08

    Although many studies have examined executive functions and facial emotion recognition in people with schizophrenia, few of them focused on the correlation between them. Furthermore, their relationship in the siblings of patients also remains unclear. The aim of the present study is to examine the correlation between executive functions and facial emotion recognition in patients with first-episode schizophrenia and their siblings. Thirty patients with first-episode schizophrenia, their twenty-six siblings, and thirty healthy controls were enrolled. They completed facial emotion recognition tasks using the Ekman Standard Faces Database, and executive functioning was measured by Wisconsin Card Sorting Test (WCST). Hierarchical regression analysis was applied to assess the correlation between executive functions and facial emotion recognition. Our study found that in siblings, the accuracy in recognizing low degree 'disgust' emotion was negatively correlated with the total correct rate in WCST (r = -0.614, p = 0.023), but was positively correlated with the total error in WCST (r = 0.623, p = 0.020); the accuracy in recognizing 'neutral' emotion was positively correlated with the total error rate in WCST (r = 0.683, p = 0.014) while negatively correlated with the total correct rate in WCST (r = -0.677, p = 0.017). People with schizophrenia showed an impairment in facial emotion recognition when identifying moderate 'happy' facial emotion, the accuracy of which was significantly correlated with the number of completed categories of WCST (R(2) = 0.432, P < .05). There were no correlations between executive functions and facial emotion recognition in the healthy control group. Our study demonstrated that facial emotion recognition impairment correlated with executive function impairment in people with schizophrenia and their unaffected siblings but not in healthy controls.

  9. Interference with facial emotion recognition by verbal but not visual loads.

    PubMed

    Reed, Phil; Steed, Ian

    2015-12-01

    The ability to recognize emotions through facial characteristics is critical for social functioning, but is often impaired in those with a developmental or intellectual disability. The current experiments explored the degree to which interfering with the processing capacities of typically-developing individuals would produce a similar inability to recognize emotions through the facial elements of faces displaying particular emotions. It was found that increasing the cognitive load (in an attempt to model learning impairments in a typically developing population) produced deficits in correctly identifying emotions from facial elements. However, this effect was much more pronounced when using a concurrent verbal task than when employing a concurrent visual task, suggesting that there is a substantial verbal element to the labeling and subsequent recognition of emotions. This concurs with previous work conducted with those with developmental disabilities that suggests emotion recognition deficits are connected with language deficits. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Orienting to face expression during encoding improves men's recognition of own gender faces.

    PubMed

    Fulton, Erika K; Bulluck, Megan; Hertzog, Christopher

    2015-10-01

    It is unclear why women have superior episodic memory of faces, but the benefit may be partially the result of women engaging in superior processing of facial expressions. Therefore, we hypothesized that orienting instructions to attend to facial expression at encoding would significantly improve men's memory of faces and possibly reduce gender differences. We directed 203 college students (122 women) to study 120 faces under instructions to orient to either the person's gender or their emotional expression. They later took a recognition test of these faces by either judging whether they had previously studied the same person or that person with the exact same expression; the latter test evaluated recollection of specific facial details. Orienting to facial expressions during encoding significantly improved men's recognition of own-gender faces and eliminated the advantage that women had for male faces under gender orienting instructions. Although gender differences in spontaneous strategy use when orienting to faces cannot fully account for gender differences in face recognition, orienting men to facial expression during encoding is one way to significantly improve their episodic memory for male faces. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Sleep Enhances Explicit Recollection in Recognition Memory

    ERIC Educational Resources Information Center

    Drosopoulos, Spyridon; Wagner, Ullrich; Born, Jan

    2005-01-01

    Recognition memory is considered to be supported by two different memory processes, i.e., the explicit recollection of information about a previous event and an implicit process of recognition based on a contextual sense of familiarity. Both types of memory supposedly rely on distinct memory systems. Sleep is known to enhance the consolidation of…

  12. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    PubMed

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Impairment of facial recognition in patients with right cerebral infarcts quantified by computer aided "morphing".

    PubMed Central

    Rösler, A; Lanquillon, S; Dippel, O; Braune, H J

    1997-01-01

OBJECTIVE: To investigate where facial recognition is located anatomically and to establish whether there is a graded transition from unimpaired recognition of faces to complete prosopagnosia after infarctions in the territory of the middle cerebral artery. METHODS: A computerised morphing program was developed which shows 30 frames gradually changing from portrait photographs of unfamiliar persons to those of well known persons. With a standardised protocol, 31 patients with right and left sided infarctions in the territory of the middle cerebral artery and an age and sex matched control group were compared by non-parametric tests. RESULTS AND CONCLUSION: Facial recognition in patients with right sided lesions was significantly impaired compared with controls and with patients with left sided lesions. A graded impairment in facial recognition in patients with right sided ischaemic infarcts in the territory of the middle cerebral artery seems to exist. PMID:9069481
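
    The 30-frame morph sequence can be approximated, in its simplest form, by a linear cross-dissolve between two face images. This is only an illustrative sketch under that assumption; the study's actual morphing software is not described beyond the gradual 30-frame transition from an unfamiliar to a well-known face.

    ```python
    import numpy as np

    def morph_sequence(img_a, img_b, n_frames=30):
        """Blend img_a into img_b over n_frames equally spaced steps.

        Frame 0 is img_a unchanged; the last frame is img_b unchanged.
        Images are float arrays of identical shape.
        """
        weights = np.linspace(0.0, 1.0, n_frames)
        return [(1 - w) * img_a + w * img_b for w in weights]
    ```

    In a recognition protocol like the one above, the frame index at which a participant first names the famous person then serves as a graded measure of recognition ability.
    
    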

  14. Autonomic imbalance is associated with reduced facial recognition in somatoform disorders.

    PubMed

    Pollatos, Olga; Herbert, Beate M; Wankner, Sarah; Dietel, Anja; Wachsmuth, Cornelia; Henningsen, Peter; Sack, Martin

    2011-10-01

Somatoform disorders are characterized by the presence of multiple somatic symptoms. While the accuracy of perceiving bodily signals (interoceptive awareness) has only sparsely been investigated in somatoform disorders, recent research has associated autonomic imbalance with cognitive and emotional difficulties in stress-related diseases. This study aimed to investigate how sympathovagal reactivity interacts with performance in recognizing emotions in faces (facial recognition task). Using a facial recognition and appraisal task, skin conductance levels (SCLs), heart rate (HR) and heart rate variability (HRV) were assessed in 26 somatoform patients and compared to healthy controls. Interoceptive awareness was assessed by a heartbeat detection task. We found evidence for a sympathovagal imbalance in somatoform disorders characterized by low parasympathetic reactivity during emotional tasks and increased sympathetic activation during baseline. Somatoform patients exhibited reduced recognition performance for neutral and sad emotional expressions only. Possible confounding variables such as alexithymia, anxiety or depression were taken into account. Interoceptive awareness was reduced in somatoform patients. Our data demonstrate an imbalance in sympathovagal activation in somatoform disorders associated with decreased parasympathetic activation. This might account for difficulties in processing sad and neutral facial expressions in somatoform patients, which might be a pathogenic mechanism for increased everyday vulnerability. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Facial emotion recognition and sleep in mentally disordered patients: A natural experiment in a high security hospital.

    PubMed

    Chu, Simon; McNeill, Kimberley; Ireland, Jane L; Qurashi, Inti

    2015-12-15

    We investigated the relationship between a change in sleep quality and facial emotion recognition accuracy in a group of mentally-disordered inpatients at a secure forensic psychiatric unit. Patients whose sleep improved over time also showed improved facial emotion recognition while patients who showed no sleep improvement showed no change in emotion recognition. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. The effect of comorbid depression on facial and prosody emotion recognition in first-episode schizophrenia spectrum.

    PubMed

    Herniman, Sarah E; Allott, Kelly A; Killackey, Eóin; Hester, Robert; Cotton, Sue M

    2017-01-15

    Comorbid depression is common in first-episode schizophrenia spectrum (FES) disorders. Both depression and FES are associated with significant deficits in facial and prosody emotion recognition performance. However, it remains unclear whether people with FES and comorbid depression, compared to those without comorbid depression, have overall poorer emotion recognition, or instead, a different pattern of emotion recognition deficits. The aim of this study was to compare facial and prosody emotion recognition performance between those with and without comorbid depression in FES. This study involved secondary analysis of baseline data from a randomized controlled trial of vocational intervention for young people with first-episode psychosis (N=82; age range: 15-25 years). Those with comorbid depression (n=24) had more accurate recognition of sadness in faces compared to those without comorbid depression. Severity of depressive symptoms was also associated with more accurate recognition of sadness in faces. Such results did not recur for prosody emotion recognition. In addition to the cross-sectional design, limitations of this study include the absence of facial and prosodic recognition of neutral emotions. Findings indicate a mood congruent negative bias in facial emotion recognition in those with comorbid depression and FES, and provide support for cognitive theories of depression that emphasise the role of such biases in the development and maintenance of depression. Longitudinal research is needed to determine whether mood-congruent negative biases are implicated in the development and maintenance of depression in FES, or whether such biases are simply markers of depressed state. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality

    PubMed Central

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque

    2018-01-01

    Extensive possibilities of applications have made emotion recognition ineluctable and challenging in the field of computer science. The use of non-verbal cues such as gestures, body movement, and facial expressions convey the feeling and the feedback to the user. This discipline of Human–Computer Interaction places reliance on the algorithmic robustness and the sensitivity of the sensor to ameliorate the recognition. Sensors play a significant role in accurate detection by providing a very high-quality input, hence increasing the efficiency and the reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence in the machines. This paper presents a brief study of the various approaches and the techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting the emotions by facial expressions. Later, mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam. PMID:29389845

  18. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality.

    PubMed

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque; Javaid, Ahmad Y

    2018-02-01

    The extensive range of possible applications has made emotion recognition both ineluctable and challenging in the field of computer science. Non-verbal cues such as gestures, body movement, and facial expressions convey the user's feelings and provide feedback to the system. This discipline of Human-Computer Interaction relies on algorithmic robustness and on the sensitivity of the sensor to improve recognition. Sensors play a significant role in accurate detection by providing very high-quality input, increasing the efficiency and reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases used as data sets for algorithms that detect emotions from facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition, and some preliminary results of emotion recognition using MHL are presented. The paper concludes by comparing results of emotion recognition by the MHL and a regular webcam.

  19. Recognition Memory for Realistic Synthetic Faces

    PubMed Central

    Yotsumoto, Yuko; Kahana, Michael J.; Wilson, Hugh R.; Sekuler, Robert

    2006-01-01

    A series of experiments examined short-term recognition memory for trios of briefly presented, synthetic human faces derived from three real human faces. The stimuli were graded series of faces, which differed by varying known amounts from the face of the average female. Faces based on each of the three real faces were transformed so as to lie along orthogonal axes in a 3-D face space. Experiment 1 showed that the synthetic faces' perceptual similarity structure strongly influenced recognition memory. Results were fit by NEMo, a noisy exemplar model of perceptual recognition memory. The fits revealed that recognition memory was influenced both by the similarity of the probe to the series items and by the similarities among the series items themselves. Non-metric multi-dimensional scaling (MDS) showed that the faces' perceptual representations largely preserved the 3-D space in which the face stimuli were arrayed. NEMo gave a better account of the results when similarity was defined as perceptual, MDS similarity rather than as physical proximity of one face to another. Experiment 2 confirmed the importance of within-list homogeneity directly, without the mediation of a model. We discuss the affinities and differences between visual memory for synthetic faces and memory for simpler stimuli. PMID:17948069
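
    The summed-similarity rule at the heart of noisy exemplar models such as NEMo can be sketched in a few lines. The decay rate, decision criterion, noise level, and face-space coordinates below are illustrative assumptions, not values from the study.

```python
import math
import random

def summed_similarity(probe, study_list, c=1.0):
    """Sum of exponentially decaying similarities between a probe face
    and each studied face, with faces as points in a face space."""
    total = 0.0
    for item in study_list:
        d = math.dist(probe, item)      # perceptual distance in face space
        total += math.exp(-c * d)       # similarity decays with distance
    return total

def recognize(probe, study_list, criterion=1.5, c=1.0, noise=0.0, rng=None):
    """Respond 'old' when noisy summed similarity exceeds a criterion."""
    rng = rng or random.Random(0)
    evidence = summed_similarity(probe, study_list, c) + rng.gauss(0.0, noise)
    return evidence > criterion

# Three studied faces lying along orthogonal axes of a toy 3-D face space
study = [(0.5, 0.0, 0.0), (0.0, 0.5, 0.0), (0.0, 0.0, 0.5)]
print(recognize((0.5, 0.0, 0.0), study))   # probe matching a studied face
print(recognize((2.0, 2.0, 2.0), study))   # distant lure
```

Because the probe is compared against every list item, a lure that is similar to several studied faces can accrue enough summed similarity to be falsely recognized, which is how such models capture the within-list homogeneity effect the abstract describes.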

  20. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    ERIC Educational Resources Information Center

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  1. A Brief Review of Facial Emotion Recognition Based on Visual Information.

    PubMed

    Ko, Byoung Chul

    2018-01-30

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of research in the field of FER conducted over the past decades. First, conventional FER approaches are described, along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling "end-to-end" learning are then presented. This review also covers an up-to-date hybrid deep-learning approach that combines a convolutional neural network (CNN) for the spatial features of an individual frame with long short-term memory (LSTM) for the temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, along with a comparison against benchmark results, which serve as a standard for the quantitative comparison of FER research. This review can serve as a brief guidebook for newcomers to the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as for experienced researchers looking for productive directions for future work.

  2. Facial emotion recognition in Parkinson's disease: A review and new hypotheses

    PubMed Central

    Vérin, Marc; Sauleau, Paul; Grandjean, Didier

    2018-01-01

    Abstract Parkinson's disease (PD) is a neurodegenerative disorder classically characterized by motor symptoms. Among them, hypomimia affects facial expressiveness and social communication and has a highly negative impact on patients' and relatives' quality of life. Patients also frequently experience nonmotor symptoms, including emotional-processing impairments, leading to difficulty in recognizing emotions from faces. Aside from its theoretical importance, understanding the disruption of facial emotion recognition in PD is crucial for improving quality of life for both patients and caregivers, as this impairment is associated with heightened interpersonal difficulties. However, studies assessing the ability to recognize facial emotions in PD still report contradictory outcomes. The origins of this inconsistency are unclear, and several questions (regarding the role of dopamine replacement therapy or the possible consequences of hypomimia) remain unanswered. We therefore undertook a fresh review of relevant articles focusing on facial emotion recognition in PD to deepen current understanding of this nonmotor feature, exploring multiple significant potential confounding factors, both clinical and methodological, and discussing probable pathophysiological mechanisms. This led us to examine recent proposals about the role of basal ganglia-based circuits in emotion and to consider the involvement of facial mimicry in this deficit from the perspective of embodied simulation theory. We believe our findings will inform clinical practice and increase fundamental knowledge, particularly in relation to potential embodied emotion impairment in PD. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:29473661

  3. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition, covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA): it solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
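
    The covariance regularization that lets RDA interpolate between QDA (per-class covariance) and LDA (pooled covariance) can be sketched from the standard RDA formulation. The example matrices and the (lam, gamma) values below are illustrative, not taken from the paper, which tunes these parameters with PSO.

```python
def blend_cov(cov_k, cov_pooled, lam, gamma):
    """RDA-regularized covariance for class k (matrices as nested lists):
    interpolate between the class and pooled covariances with `lam`,
    then shrink toward a scaled identity with `gamma`."""
    n = len(cov_k)
    mix = [[(1 - lam) * cov_k[i][j] + lam * cov_pooled[i][j]
            for j in range(n)] for i in range(n)]
    trace = sum(mix[i][i] for i in range(n))
    return [[(1 - gamma) * mix[i][j] + (gamma * trace / n if i == j else 0.0)
             for j in range(n)] for i in range(n)]

cov_k = [[2.0, 1.0], [1.0, 4.0]]       # class covariance (QDA end)
pooled = [[1.0, 0.0], [0.0, 1.0]]      # pooled covariance (LDA end)
print(blend_cov(cov_k, pooled, lam=0.5, gamma=0.1))
```

With lam=0 and gamma=0 this reduces to QDA's class covariance, lam=1 recovers LDA's pooled covariance, and gamma>0 stabilizes the estimate when samples are scarce, which is the small-sample-size problem the abstract mentions.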

  4. Are there differential deficits in facial emotion recognition between paranoid and non-paranoid schizophrenia? A signal detection analysis.

    PubMed

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2013-10-30

    This study assessed facial emotion recognition abilities in subjects with paranoid and non-paranoid schizophrenia using signal detection theory. We explored the differential deficits in facial emotion recognition in 44 paranoid patients with schizophrenia (PS) and 30 non-paranoid patients with schizophrenia (NPS), compared to 80 healthy controls. We used morphed faces with different intensities of emotion and computed the sensitivity index (d') for each emotion. The results showed that performance differed between the schizophrenia and healthy control groups in the recognition of both negative and positive affects. The PS group performed worse than the healthy control group but better than the NPS group in overall performance. Performance differed between the NPS and healthy control groups in the recognition of all basic emotions and neutral faces; between the PS and healthy control groups in the recognition of angry faces; and between the PS and NPS groups in the recognition of happiness, anger, sadness, disgust, and neutral affects. The facial emotion recognition impairment in schizophrenia may reflect a generalized deficit rather than a negative-emotion-specific deficit, with differential deficits between PS and NPS patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
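
    The sensitivity index d' used in such signal detection analyses is the z-transform of the hit rate minus the z-transform of the false-alarm rate. A minimal computation follows; the counts are invented for illustration, and the log-linear correction is one common convention, not necessarily the one used in this study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps rates strictly between 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one emotion in one group
print(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38))
```

A d' of 0 indicates chance-level discrimination of emotional from neutral (or target from non-target) expressions; higher values indicate better sensitivity independently of response bias, which is what makes d' preferable to raw accuracy for group comparisons like PS vs. NPS.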

  5. Facial Emotion Recognition by Persons with Mental Retardation: A Review of the Experimental Literature.

    ERIC Educational Resources Information Center

    Rojahn, Johannes; And Others

    1995-01-01

    This literature review discusses 21 studies on facial emotion recognition by persons with mental retardation in terms of methodological characteristics, stimulus material, salient variables and their relation to recognition tasks, and emotion recognition deficits in mental retardation. A table provides comparative data on all 21 studies. (DB)

  6. Looking like a criminal: stereotypical black facial features promote face source memory error.

    PubMed

    Kleider, Heather M; Cavrak, Sarah E; Knuycky, Leslie R

    2012-11-01

    The present studies tested whether African American face type (stereotypical or nonstereotypical) facilitated stereotype-consistent categorization, and whether that categorization influenced memory accuracy and errors. Previous studies have shown that stereotypically Black features are associated with crime and violence (e.g., Blair, Judd, & Chapleau Psychological Science 15:674-679, 2004; Blair, Judd, & Fallman Journal of Personality and Social Psychology 87:763-778, 2004; Blair, Judd, Sadler, & Jenkins Journal of Personality and Social Psychology 83:5-25, 2002); here, we extended this finding to investigate whether there is a bias toward remembering and recategorizing stereotypical faces as criminals. Using category labels consistent (or inconsistent) with race-based expectations, we tested whether face recognition and recategorization were driven by the similarity between a target's facial features and a stereotyped category (i.e., stereotypical Black faces associated with crime/violence). The results revealed that stereotypical faces were associated more often with a stereotype-consistent label (Study 1), were remembered and correctly recategorized as criminals (Studies 2-4), and were miscategorized as criminals when memory failed. These effects occurred regardless of race or gender. Together, these findings suggest that face types have strong category associations that can promote stereotype-motivated recognition errors. Implications for eyewitness accuracy are discussed.

  7. Developmental prosopagnosia and the Benton Facial Recognition Test.

    PubMed

    Duchaine, Bradley C; Nakayama, Ken

    2004-04-13

    The Benton Facial Recognition Test is used for clinical and research purposes, but evidence suggests that it is possible to pass the test with impaired face discrimination abilities. The authors tested 11 patients with developmental prosopagnosia using this test, and a majority scored in the normal range. Consequently, scores in the normal range should be interpreted cautiously, and testing should always be supplemented by other face tests.

  8. Pervasive influence of idiosyncratic associative biases during facial emotion recognition.

    PubMed

    El Zein, Marwa; Wyart, Valentin; Grèzes, Julie

    2018-06-11

    Facial morphology has been shown to influence perceptual judgments of emotion in a way that is shared across human observers. Here we demonstrate that these shared associations between facial morphology and emotion coexist with strong variations unique to each human observer. Interestingly, a large part of these idiosyncratic associations does not vary on short time scales, emerging from stable inter-individual differences in the way facial morphological features influence emotion recognition. Computational modelling of decision-making and neural recordings of electrical brain activity revealed that both shared and idiosyncratic face-emotion associations operate through a common biasing mechanism rather than an increased sensitivity to face-associated emotions. Together, these findings emphasize the underestimated influence of idiosyncrasies on core social judgments and identify their neuro-computational signatures.

  9. The Relation of Facial Affect Recognition and Empathy to Delinquency in Youth Offenders

    ERIC Educational Resources Information Center

    Carr, Mary B.; Lutjemeier, John A.

    2005-01-01

    Associations among facial affect recognition, empathy, and self-reported delinquency were studied in a sample of 29 male youth offenders at a probation placement facility. Youth offenders were asked to recognize facial expressions of emotions from adult faces, child faces, and cartoon faces. Youth offenders also responded to a series of statements…

  10. Facial recognition: a cognitive study of elderly dementia patients and normal older adults.

    PubMed

    Zandi, T; Cooper, M; Garrison, L

    1992-01-01

    Recognition of familiar, ordinary emotional facial expressions was tested in dementia patients and normal elderly adults. In three conditions, subjects were required to name the emotions depicted in pictures and to produce them when presented with the verbal labels of the expressions. The dementia patients' best performance occurred when they had access to the verbal labels while viewing the pictures. The major deficiency in facial recognition was found to be dysnomia-related. The findings of this study suggest that the connection between the gnostic units of expression and the gnostic units of verbal labeling is not significantly impaired among dementia patients.

  11. Local intensity area descriptor for facial recognition in ideal and noise conditions

    NASA Astrophysics Data System (ADS)

    Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu

    2017-03-01

    We propose a local texture descriptor, local intensity area descriptor (LIAD), which is applied for human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improvement in accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
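
    The matching stage described above (a nearest-neighbor classifier over concatenated per-region histograms, with histogram intersection and chi-square statistics as dissimilarity measures) can be sketched generically. The descriptors below are placeholder histograms, not actual LIAD output.

```python
def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms
    (lower means more similar)."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def intersection_distance(h1, h2):
    """1 minus histogram intersection, so lower also means more similar."""
    return 1.0 - sum(min(a, b) for a, b in zip(h1, h2))

def nearest_neighbor(probe, gallery, dist):
    """Return the label of the gallery descriptor closest to the probe."""
    return min(gallery, key=lambda item: dist(probe, item[1]))[0]

# Toy gallery: (label, concatenated per-region histogram)
gallery = [
    ("subject_A", [0.7, 0.2, 0.1, 0.0]),
    ("subject_B", [0.1, 0.1, 0.3, 0.5]),
]
probe = [0.6, 0.3, 0.1, 0.0]
print(nearest_neighbor(probe, gallery, chi_square))
print(nearest_neighbor(probe, gallery, intersection_distance))
```

In a real pipeline each histogram would be the concatenation of per-region LIAD (or LBP-style) histograms, so its length is the number of regions times the number of descriptor bins; the low dimensionality the authors report directly shrinks this vector.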

  12. Relationship between individual differences in functional connectivity and facial-emotion recognition abilities in adults with traumatic brain injury.

    PubMed

    Rigon, A; Voss, M W; Turkstra, L S; Mutlu, B; Duff, M C

    2017-01-01

    Although several studies have demonstrated that facial-affect recognition impairment is common following moderate-severe traumatic brain injury (TBI), and that there are diffuse alterations in large-scale functional brain networks in TBI populations, little is known about the relationship between the two. Here, in a sample of 26 participants with TBI and 20 healthy comparison participants (HC) we measured facial-affect recognition abilities and resting-state functional connectivity (rs-FC) using fMRI. We then used network-based statistics to examine (A) the presence of rs-FC differences between individuals with TBI and HC within the facial-affect processing network, and (B) the association between inter-individual differences in emotion recognition skills and rs-FC within the facial-affect processing network. We found that participants with TBI showed significantly lower rs-FC in a component comprising homotopic and within-hemisphere, anterior-posterior connections within the facial-affect processing network. In addition, within the TBI group, participants with higher emotion-labeling skills showed stronger rs-FC within a network comprised of intra- and inter-hemispheric bilateral connections. Findings indicate that the ability to successfully recognize facial-affect after TBI is related to rs-FC within components of facial-affective networks, and provide new evidence that further our understanding of the mechanisms underlying emotion recognition impairment in TBI.

  13. Deficits in facial affect recognition among antisocial populations: a meta-analysis.

    PubMed

    Marsh, Abigail A; Blair, R J R

    2008-01-01

    Individuals with disorders marked by antisocial behavior frequently show deficits in recognizing displays of facial affect. Antisociality may be associated with specific deficits in identifying fearful expressions, which would implicate dysfunction in neural structures that subserve fearful expression processing. A meta-analysis of 20 studies was conducted to assess: (a) if antisocial populations show any consistent deficits in recognizing six emotional expressions; (b) beyond any generalized impairment, whether specific fear recognition deficits are apparent; and (c) if deficits in fear recognition are a function of task difficulty. Results show a robust link between antisocial behavior and specific deficits in recognizing fearful expressions. This impairment cannot be attributed solely to task difficulty. These results suggest dysfunction among antisocial individuals in specified neural substrates, namely the amygdala, involved in processing fearful facial affect.

  14. Violent video game players and non-players differ on facial emotion recognition.

    PubMed

    Diaz, Ruth L; Wong, Ulric; Hodgins, David C; Chiu, Carina G; Goghari, Vina M

    2016-01-01

    Violent video game playing has been associated with both positive and negative effects on cognition. We examined whether playing two or more hours of violent video games a day, compared to not playing video games, was associated with a different pattern of recognition of five facial emotions, while controlling for general perceptual and cognitive differences that might also occur. Undergraduate students were categorized as violent video game players (n = 83) or non-gamers (n = 69) and completed a facial recognition task consisting of an emotion recognition condition and a control condition of gender recognition. Additionally, participants completed questionnaires assessing their video game and media consumption, aggression, and mood. Violent video game players recognized fearful faces both more accurately and more quickly, and disgusted faces less accurately, than non-gamers. Desensitization to violence, constant exposure to fear and anxiety during game playing, and habituation to unpleasant stimuli are possible mechanisms that could explain these results. Future research should evaluate the effects of violent video game playing on emotion processing and social cognition more broadly. © 2015 Wiley Periodicals, Inc.

  15. The different faces of one's self: an fMRI study into the recognition of current and past self-facial appearances.

    PubMed

    Apps, Matthew A J; Tajadura-Jiménez, Ana; Turley, Grainne; Tsakiris, Manos

    2012-11-15

    Mirror self-recognition is often considered as an index of self-awareness. Neuroimaging studies have identified a neural circuit specialised for the recognition of one's own current facial appearance. However, faces change considerably over a lifespan, highlighting the necessity for representations of one's face to continually be updated. We used fMRI to investigate the different neural circuits involved in the recognition of the childhood and current, adult, faces of one's self. Participants viewed images of either their own face as it currently looks morphed with the face of a familiar other or their childhood face morphed with the childhood face of the familiar other. Activity in areas which have a generalised selectivity for faces, including the inferior occipital gyrus, the superior parietal lobule and the inferior temporal gyrus, varied with the amount of current self in an image. Activity in areas involved in memory encoding and retrieval, including the hippocampus and the posterior cingulate gyrus, and areas involved in creating a sense of body ownership, including the temporo-parietal junction and the inferior parietal lobule, varied with the amount of childhood self in an image. We suggest that the recognition of one's own past or present face is underpinned by different cognitive processes in distinct neural circuits. Current self-recognition engages areas involved in perceptual face processing, whereas childhood self-recognition recruits networks involved in body ownership and memory processing. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Is the emotion recognition deficit associated with frontotemporal dementia caused by selective inattention to diagnostic facial features?

    PubMed

    Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V

    2014-07-01

    Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Two processes support visual recognition memory in rhesus monkeys.

    PubMed

    Guderian, Sebastian; Brigham, Danielle; Mishkin, Mortimer

    2011-11-29

    A large body of evidence in humans suggests that recognition memory can be supported by both recollection and familiarity. Recollection-based recognition is characterized by the retrieval of contextual information about the episode in which an item was previously encountered, whereas familiarity-based recognition is characterized instead by knowledge only that the item had been encountered previously in the absence of any context. To date, it is unknown whether monkeys rely on similar mnemonic processes to perform recognition memory tasks. Here, we present evidence from the analysis of receiver operating characteristics, suggesting that visual recognition memory in rhesus monkeys also can be supported by two separate processes and that these processes have features considered to be characteristic of recollection and familiarity. Thus, the present study provides converging evidence across species for a dual process model of recognition memory and opens up the possibility of studying the neural mechanisms of recognition memory in nonhuman primates on tasks that are highly similar to the ones used in humans.
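
    The receiver operating characteristics mentioned here are built from confidence ratings: cumulating responses from the most confident "old" rating downward yields one hit/false-alarm point per criterion, and the shape of the resulting curve is what dual-process analyses interrogate. A sketch with invented counts:

```python
def roc_points(old_counts, new_counts):
    """Cumulative (false-alarm rate, hit rate) pairs from confidence-binned
    response counts, ordered from the most confident 'old' rating downward."""
    n_old, n_new = sum(old_counts), sum(new_counts)
    points, hits, fas = [], 0, 0
    for o, n in zip(old_counts, new_counts):
        hits += o
        fas += n
        points.append((fas / n_new, hits / n_old))
    return points

# Hypothetical 6-point confidence scale (index 0 = "sure old")
old_items = [40, 20, 10, 10, 10, 10]   # responses to studied items
new_items = [5, 10, 10, 15, 25, 35]    # responses to lures
for fa, hit in roc_points(old_items, new_items):
    print(round(fa, 2), round(hit, 2))
```

In dual-process fits, a curvilinear, symmetric ROC is taken as the signature of familiarity alone, while an upward displacement of the left-most points (a y-intercept above zero) is attributed to recollection; the monkey data in this study were analyzed in that spirit.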

  18. fMRI characterization of visual working memory recognition.

    PubMed

    Rahm, Benjamin; Kaiser, Jochen; Unterrainer, Josef M; Simon, Juliane; Bledowski, Christoph

    2014-04-15

    Encoding and maintenance of information in visual working memory have been extensively studied, highlighting the crucial and capacity-limiting role of fronto-parietal regions. In contrast, the neural basis of recognition in visual working memory has remained largely unspecified. Cognitive models suggest that recognition relies on a matching process that compares sensory information with the mental representations held in memory. To characterize the neural basis of recognition we varied both the need for recognition and the degree of similarity between the probe item and the memory contents, while independently manipulating memory load to produce load-related fronto-parietal activations. fMRI revealed a fractionation of working memory functions across four distributed networks. First, fronto-parietal regions were activated independent of the need for recognition. Second, anterior parts of load-related parietal regions contributed to recognition but their activations were independent of the difficulty of matching in terms of sample-probe similarity. These results argue against a key role of the fronto-parietal attention network in recognition. Rather the third group of regions including bilateral temporo-parietal junction, posterior cingulate cortex and superior frontal sulcus reflected demands on matching both in terms of sample-probe-similarity and the number of items to be compared. Also, fourth, bilateral motor regions and right superior parietal cortex showed higher activation when matching provided clear evidence for a decision. Together, the segregation between the well-known fronto-parietal activations attributed to attentional operations in working memory from those regions involved in matching supports the theoretical view of separable attentional and mnemonic contributions to working memory. Yet, the close theoretical and empirical correspondence to perceptual decision making may call for an explicit consideration of decision making mechanisms in

  19. Utilizing Current Commercial-off-the-Shelf Facial Recognition and Public Live Video Streaming to Enhance National Security

    DTIC Science & Technology

    2014-09-01

    biometrics technologies. 14. SUBJECT TERMS: facial recognition, systems engineering, live video streaming, security cameras, national security ... national security by sharing biometric facial recognition data in real time utilizing infrastructures currently in place. It should be noted that the ... 9/11), law enforcement (LE) and Intelligence Community (IC) authorities responsible for protecting citizens from threats against national security

  20. The Reliability of Facial Recognition of Deceased Persons on Photographs.

    PubMed

    Caplova, Zuzana; Obertova, Zuzana; Gibelli, Daniele M; Mazzarelli, Debora; Fracasso, Tony; Vanezis, Peter; Sforza, Chiarella; Cattaneo, Cristina

    2017-09-01

    In humanitarian emergencies, such as the ongoing deaths of migrants in the Mediterranean, the antemortem documentation needed for identification may be limited. The use of visual identification has previously been reported in cases of mass disasters such as the Thai tsunami. This pilot study explores the ability of observers to match unfamiliar faces of living and dead persons and asks whether facial morphology can be used for identification. A questionnaire was given to 41 students and five professionals in the field of forensic identification, with the task of choosing whether a facial photograph corresponds to one of the five photographs in a lineup and identifying the features most useful for recognition. Although the overall recognition score did not differ significantly between professionals and students, the median scores of 78.1% and 80.0%, respectively, were too low for this method to be considered a reliable identification method; it thus needs to be supported by other means. © 2017 American Academy of Forensic Sciences.

  1. Development of emotional facial recognition in late childhood and adolescence.

    PubMed

    Thomas, Laura A; De Bellis, Michael D; Graham, Reiko; LaBar, Kevin S

    2007-09-01

    The ability to interpret emotions in facial expressions is crucial for social functioning across the lifespan. Facial expression recognition develops rapidly during infancy and improves with age during the preschool years. However, the developmental trajectory from late childhood to adulthood is less clear. We tested older children, adolescents and adults on a two-alternative forced-choice discrimination task using morphed faces that varied in emotional content. Actors appeared to pose expressions that changed incrementally along three progressions: neutral-to-fear, neutral-to-anger, and fear-to-anger. Across all three morph types, adults displayed more sensitivity to subtle changes in emotional expression than children and adolescents. Fear morphs and fear-to-anger blends showed a linear developmental trajectory, whereas anger morphs showed a quadratic trend, increasing sharply from adolescents to adults. The results provide evidence for late developmental changes in emotional expression recognition with some specificity in the time course for distinct emotions.

  2. A New Method of Facial Expression Recognition Based on SPE Plus SVM

    NASA Astrophysics Data System (ADS)

    Ying, Zilu; Huang, Mingwei; Wang, Zhen; Wang, Zhewei

    A novel method of facial expression recognition (FER) is presented, which uses stochastic proximity embedding (SPE) for data dimension reduction and a support vector machine (SVM) for expression classification. The proposed algorithm is applied to the Japanese Female Facial Expression (JAFFE) database for FER, and better performance is obtained than with traditional algorithms such as PCA and LDA. The results further prove the effectiveness of the proposed algorithm.
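
    SPE itself is a simple stochastic scheme: repeatedly pick a pair of items and nudge their low-dimensional coordinates so the embedded distance moves toward the input-space distance, under a decaying learning rate. The sketch below follows that generic update rule; the learning-rate schedule, iteration count, and toy distance matrix are arbitrary choices, not the authors' settings.

```python
import math
import random

def spe_embed(dists, dim=2, iters=20000, lam0=1.0, eps=1e-9, seed=0):
    """Stochastic proximity embedding: repeatedly pick a random pair and
    nudge their embedded positions so the embedded distance approaches
    the target distance. `dists[i][j]` is the target distance matrix."""
    rng = random.Random(seed)
    n = len(dists)
    pts = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        lam = lam0 * (1 - t / iters)          # decaying learning rate
        i, j = rng.sample(range(n), 2)
        d = math.dist(pts[i], pts[j])
        scale = lam * 0.5 * (dists[i][j] - d) / (d + eps)
        for k in range(dim):
            delta = scale * (pts[i][k] - pts[j][k])
            pts[i][k] += delta
            pts[j][k] -= delta
    return pts

# Three items whose pairwise target distances form a 3-4-5 triangle
target = [[0, 3, 4],
          [3, 0, 5],
          [4, 5, 0]]
emb = spe_embed(target)
print([round(math.dist(emb[0], emb[1]), 2),
       round(math.dist(emb[0], emb[2]), 2),
       round(math.dist(emb[1], emb[2]), 2)])
```

In an FER pipeline the `dists` matrix would hold distances between high-dimensional face feature vectors, and the resulting low-dimensional coordinates would be fed to the SVM classifier.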

  3. Examining ERP correlates of recognition memory: Evidence of accurate source recognition without recollection

    PubMed Central

    Addante, Richard J.; Ranganath, Charan; Yonelinas, Andrew P.

    2012-01-01

    Recollection is typically associated with high recognition confidence and accurate source memory. However, subjects sometimes make accurate source memory judgments even for items that are not confidently recognized, and it is not known whether these responses are based on recollection or some other memory process. In the current study, we measured event related potentials (ERPs) while subjects made item and source memory confidence judgments in order to determine whether recollection supported accurate source recognition responses for items that were not confidently recognized. In line with previous studies, we found that recognition memory was associated with two ERP effects: an early on-setting FN400 effect, and a later parietal old-new effect [Late Positive Component (LPC)], which have been associated with familiarity and recollection, respectively. The FN400 increased gradually with item recognition confidence, whereas the LPC was only observed for highly confident recognition responses. The LPC was also related to source accuracy, but only for items that had received a high confidence item recognition response; accurate source judgments to items that were less confidently recognized did not exhibit the typical ERP correlate of recollection or familiarity, but rather showed a late, broadly distributed negative ERP difference. The results indicate that accurate source judgments of episodic context can occur even when recollection fails. PMID:22548808

  4. Abnormal Facial Emotion Recognition in Depression: Serial Testing in an Ultra-Rapid-Cycling Patient.

    ERIC Educational Resources Information Center

    George, Mark S.; Huggins, Teresa; McDermut, Wilson; Parekh, Priti I.; Rubinow, David; Post, Robert M.

    1998-01-01

    Mood disorder subjects have a selective deficit in recognizing human facial emotion. Whether the facial emotion recognition errors persist during normal mood states (i.e., are state vs. trait dependent) was studied in one male bipolar II patient. Results of five sessions are presented and discussed. (Author/EMK)

  5. The role of the hippocampus in recognition memory.

    PubMed

    Bird, Chris M

    2017-08-01

    Many theories of declarative memory propose that it is supported by partially separable processes underpinned by different brain structures. The hippocampus plays a critical role in binding item and contextual information together and in processing the relationships between individual items. By contrast, the processing of individual items and their later recognition can be supported by extrahippocampal regions of the medial temporal lobes (MTL), particularly when recognition is based on feelings of familiarity without the retrieval of any associated information. These theories are domain-general in that "items" might be words, faces, objects, scenes, etc. However, the evidence that item recognition does not require the hippocampus, or that familiarity-based recognition can be supported by extrahippocampal regions, is mixed. By contrast, there is compelling evidence that in humans, hippocampal damage does not affect recognition memory for unfamiliar faces, whilst recognition memory for several other stimulus classes is impaired. I propose that regions outside of the hippocampus can support recognition of unfamiliar faces because they are perceived as discrete items and have no prior conceptual associations. Conversely, extrahippocampal processes are inadequate for recognition of items which (a) have been previously experienced, (b) are conceptually meaningful, or (c) are perceived as being comprised of individual elements. This account reconciles findings from primate and human studies of recognition memory. Furthermore, it suggests that while the hippocampus is critical for binding and relational processing, these processes are required for item recognition memory in most situations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Real Time Large Memory Optical Pattern Recognition.

    DTIC Science & Technology

    1984-06-01

    Real Time Large Memory Optical Pattern Recognition. Don A. Gregory, Research Directorate, US Army Missile Laboratory, US Army Missile Command, Redstone Arsenal, AL. Technical Report RR-84-9, June 1984.

  7. Cerebellum and processing of negative facial emotions: cerebellar transcranial DC stimulation specifically enhances the emotional recognition of facial anger and sadness.

    PubMed

    Ferrucci, Roberta; Giannicola, Gaia; Rosa, Manuela; Fumagalli, Manuela; Boggio, Paulo Sergio; Hallett, Mark; Zago, Stefano; Priori, Alberto

    2012-01-01

    Some evidence suggests that the cerebellum participates in the complex network processing emotional facial expressions. To evaluate the role of the cerebellum in recognising facial expressions, we delivered transcranial direct current stimulation (tDCS) over the cerebellum and prefrontal cortex. A facial emotion recognition task was administered to 21 healthy subjects before and after cerebellar tDCS; we also tested subjects with a visual attention task and a visual analogue scale (VAS) for mood. Anodal and cathodal cerebellar tDCS both significantly enhanced sensory processing in response to negative facial expressions (anodal tDCS, p=.0021; cathodal tDCS, p=.018), but left positive-emotion and neutral facial expressions unchanged (p>.05). tDCS over the right prefrontal cortex left facial expressions of both negative and positive emotion unchanged. These findings suggest that the cerebellum is specifically involved in processing facial expressions of negative emotion.

  8. Luminance sticker based facial expression recognition using discrete wavelet transform for physically disabled persons.

    PubMed

    Nagarajan, R; Hariharan, M; Satiyan, M

    2012-08-01

    Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research that has recently attracted many researchers. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families at different orders (db1 to db20, Coif1 to Coif5, and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expressions and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of each wavelet family, and these standard deviations form a set of feature vectors for classification. In this study, conventional validation and cross-validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely an Artificial Neural Network (ANN), k-Nearest Neighbor (kNN), and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
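    The feature pipeline described (a level-1 DWT, the standard deviation of the coefficients as features, then kNN/ANN/LDA classification) can be sketched end to end. The following numpy illustration uses a hand-rolled Haar (db1) transform and a minimal kNN; the synthetic signals and all parameters are invented for illustration and are not the paper's data.

```python
import numpy as np

def haar_level1(signal):
    """One level of a Haar (db1) DWT: approximation and detail coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                              # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def feature_vector(signal):
    """Std. deviation of the first-level coefficients, as in the described method."""
    approx, detail = haar_level1(signal)
    return np.array([approx.std(), detail.std()])

# Toy stand-in for luminance-marker trajectories of two "expressions":
# smooth movements have low detail energy, jagged ones high detail energy.
rng = np.random.default_rng(0)
smooth = [np.cumsum(rng.normal(0, 0.1, 64)) for _ in range(20)]
jagged = [rng.normal(0, 1.0, 64) for _ in range(20)]
X = np.array([feature_vector(s) for s in smooth + jagged])
y = np.array([0] * 20 + [1] * 20)

def knn_predict(X_train, y_train, x, k=3):
    """Minimal kNN classifier (a stand-in for the paper's ANN/kNN/LDA stage)."""
    idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    return np.bincount(y_train[idx]).argmax()

pred = np.array([knn_predict(X, y, x) for x in X])
```

In practice the paper sweeps many wavelet families and orders; a library such as PyWavelets would replace `haar_level1` when orders beyond db1 are needed.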

  9. A Brief Review of Facial Emotion Recognition Based on Visual Information

    PubMed Central

    2018-01-01

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of research in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling “end-to-end” learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER research, is described. This review can serve as a brief guidebook to newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as to experienced researchers looking for productive directions for future work. PMID:29385749

  10. Two processes support visual recognition memory in rhesus monkeys

    PubMed Central

    Guderian, Sebastian; Brigham, Danielle; Mishkin, Mortimer

    2011-01-01

    A large body of evidence in humans suggests that recognition memory can be supported by both recollection and familiarity. Recollection-based recognition is characterized by the retrieval of contextual information about the episode in which an item was previously encountered, whereas familiarity-based recognition is characterized instead by knowledge only that the item had been encountered previously in the absence of any context. To date, it is unknown whether monkeys rely on similar mnemonic processes to perform recognition memory tasks. Here, we present evidence from the analysis of receiver operating characteristics, suggesting that visual recognition memory in rhesus monkeys also can be supported by two separate processes and that these processes have features considered to be characteristic of recollection and familiarity. Thus, the present study provides converging evidence across species for a dual process model of recognition memory and opens up the possibility of studying the neural mechanisms of recognition memory in nonhuman primates on tasks that are highly similar to the ones used in humans. PMID:22084079
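    Receiver operating characteristics of the kind analyzed here are built by sweeping a confidence criterion across old/new ratings. Below is a small numpy sketch with a toy dual-process generator (Gaussian familiarity plus a threshold recollection process); the generator and its parameters are our assumptions for illustration, not the study's data.

```python
import numpy as np

def roc_points(old_ratings, new_ratings, n_levels=6):
    """Cumulative hit/false-alarm rates across confidence criteria,
    sweeping from the strictest 'old' criterion to the most lenient."""
    hits, fas = [], []
    for c in range(n_levels, 0, -1):      # rating >= c counts as "old"
        hits.append(np.mean(old_ratings >= c))
        fas.append(np.mean(new_ratings >= c))
    return np.array(fas), np.array(hits)

# Toy dual-process data on a 1-6 confidence scale: familiarity is a
# shifted Gaussian; recollected old items jump to maximum confidence.
rng = np.random.default_rng(0)
n = 2000
new = np.clip(np.round(rng.normal(2.5, 1.0, n)), 1, 6)
fam = np.clip(np.round(rng.normal(3.5, 1.0, n)), 1, 6)
recollect = rng.random(n) < 0.3
old = np.where(recollect, 6, fam)

fas, hits = roc_points(old, new)
```

The signature of recollection in such an analysis is an elevated y-intercept: even at the strictest criterion, the hit rate stays well above the false-alarm rate because recollected items pile up at the top confidence level.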

  11. Can corrective feedback improve recognition memory?

    PubMed

    Kantner, Justin; Lindsay, D Stephen

    2010-06-01

    An understanding of the effects of corrective feedback on recognition memory can inform both recognition theory and memory training programs, but few published studies have investigated the issue. Although the evidence to date suggests that feedback does not improve recognition accuracy, few studies have directly examined its effect on sensitivity, and fewer have created conditions that facilitate a feedback advantage by encouraging controlled processing at test. In Experiment 1, null effects of feedback were observed following both deep and shallow encoding of categorized study lists. In Experiment 2, feedback robustly influenced response bias by allowing participants to discern highly uneven base rates of old and new items, but sensitivity remained unaffected. In Experiment 3, a false-memory procedure, feedback failed to attenuate false recognition of critical lures. In Experiment 4, participants were unable to use feedback to learn a simple category rule separating old items from new items, despite the fact that feedback was of substantial benefit in a nearly identical categorization task. The recognition system, despite a documented ability to utilize controlled strategic or inferential decision-making processes, appears largely impenetrable to a benefit of corrective feedback.
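    The distinction drawn here between sensitivity and response bias is the standard signal-detection one: feedback that reveals base rates shifts the criterion c while leaving d' unchanged. A short sketch of both statistics using only the Python standard library; the example hit and false-alarm rates are illustrative, not the paper's.

```python
from statistics import NormalDist

def dprime_and_bias(hit_rate, fa_rate, eps=1e-4):
    """Signal-detection sensitivity (d') and response bias (criterion c).
    Rates are clipped away from 0 and 1 to keep the probit finite."""
    nd = NormalDist()
    h = min(max(hit_rate, eps), 1 - eps)
    f = min(max(fa_rate, eps), 1 - eps)
    d_prime = nd.inv_cdf(h) - nd.inv_cdf(f)
    c = -0.5 * (nd.inv_cdf(h) + nd.inv_cdf(f))
    return d_prime, c

# A pure bias shift, as when feedback exposes uneven base rates
# (Experiment 2's pattern): both rates rise together, d' barely moves.
d1, c1 = dprime_and_bias(0.70, 0.30)   # neutral criterion
d2, c2 = dprime_and_bias(0.85, 0.50)   # liberal criterion, same sensitivity
```

Here d1 and d2 are nearly equal while c drops from roughly 0 to about -0.5, which is exactly the dissociation the experiments report: feedback moved bias, not sensitivity.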

  12. Differential amygdala response during facial recognition in patients with schizophrenia: an fMRI study.

    PubMed

    Kosaka, H; Omori, M; Murata, T; Iidaka, T; Yamada, H; Okada, T; Takahashi, T; Sadato, N; Itoh, H; Yonekura, Y; Wada, Y

    2002-09-01

    Human lesion and neuroimaging studies suggest that the amygdala is involved in facial emotion recognition. Although impairments in the recognition of facial and/or emotional expression have been reported in schizophrenia, few neuroimaging studies have examined differential brain activation during facial recognition between patients with schizophrenia and normal controls. To investigate amygdala responses during facial recognition in schizophrenia, we conducted a functional magnetic resonance imaging (fMRI) study with 12 right-handed medicated patients with schizophrenia and 12 age- and sex-matched healthy controls. The experimental task was a type of emotional intensity judgment task. During the task period, subjects were asked to view happy (or angry/disgusting/sad) and neutral faces presented simultaneously every 3 s and to judge which face was more emotional (positive or negative face discrimination). Imaging data were analyzed on a voxel-by-voxel basis for single-group analysis and for between-group analysis according to the random effect model using Statistical Parametric Mapping (SPM). No significant difference in task accuracy was found between the schizophrenic and control groups. Positive face discrimination activated the bilateral amygdalae of both controls and schizophrenics, with more prominent activation of the right amygdala in the schizophrenic group. Negative face discrimination activated the bilateral amygdalae in the schizophrenic group but only the right amygdala in the control group, although no significant group difference was found. The exaggerated amygdala activation during emotional intensity judgment found in the schizophrenic patients may reflect impaired gating of sensory input containing emotion. Copyright 2002 Elsevier Science B.V.

  13. Gaze Dynamics in the Recognition of Facial Expressions of Emotion.

    PubMed

    Barabanschikov, Vladimir A

    2015-01-01

    We studied which parts and features of the human face are preferentially fixated during the recognition of facial expressions of emotion. Photographs of facial expressions were used. Participants were asked to categorize these as basic emotions while their eye movements were recorded. Variation in the intensity of an expression was mirrored in the accuracy of emotion recognition; it was also reflected in several indices of oculomotor function: the duration of inspection of certain areas of the face (its upper and lower parts, right and left sides), the location, number, and duration of fixations, and the viewing trajectory. In particular, for low-intensity expressions the right side of the face was attended to predominantly (right-side dominance); this right-side dominance effect was, however, absent for high-intensity expressions. For both low- and high-intensity expressions the upper part of the face was predominantly fixated, though more so for high-intensity expressions. The majority of trials (70%), in line with findings in previous studies, revealed a V-shaped inspection trajectory. However, no relationship was found between the accuracy of recognition of emotional expressions and either the location and duration of fixations or the pattern of gaze direction on the face. © The Author(s) 2015.

  14. The different faces of one’s self: an fMRI study into the recognition of current and past self-facial appearances

    PubMed Central

    Apps, Matthew A. J.; Tajadura-Jiménez, Ana; Turley, Grainne; Tsakiris, Manos

    2013-01-01

    Mirror self-recognition is often considered as an index of self-awareness. Neuroimaging studies have identified a neural circuit specialised for the recognition of one’s own current facial appearance. However, faces change considerably over a lifespan, highlighting the necessity for representations of one’s face to continually be updated. We used fMRI to investigate the different neural circuits involved in the recognition of the childhood and current, adult, faces of one’s self. Participants viewed images of either their own face as it currently looks morphed with the face of a familiar other or their childhood face morphed with the childhood face of the familiar other. Activity in areas which have a generalised selectivity for faces, including the inferior occipital gyrus, the superior parietal lobule and the inferior temporal gyrus, varied with the amount of current self in an image. Activity in areas involved in memory encoding and retrieval, including the hippocampus and the posterior cingulate gyrus, and areas involved in creating a sense of body ownership, including the temporo-parietal junction and the inferior parietal lobule, varied with the amount of childhood self in an image. We suggest that the recognition of one’s own past or present face is underpinned by different cognitive processes in distinct neural circuits. Current self-recognition engages areas involved in perceptual face processing, whereas childhood self-recognition recruits networks involved in body ownership and memory processing. PMID:22940117

  15. Unspoken vowel recognition using facial electromyogram.

    PubMed

    Arjunan, Sridhar P; Kumar, Dinesh K; Yau, Wai C; Weghorn, Hans

    2006-01-01

    This paper aims to identify speech from facial muscle activity alone, without audio signals. It presents an effective technique that measures the relative activity of the articulatory muscles. Five English vowels were used as recognition variables. The moving root mean square (RMS) of the surface electromyogram (SEMG) of four facial muscles is used to segment the signal and identify the start and end of an utterance. The RMS of the signal between the start and end markers was integrated and normalised, representing the relative activity of the four muscles. These features were classified using a back-propagation neural network to identify the speech. The technique successfully classified the 5 vowels into three classes and was not sensitive to variation in the speed and style of speaking across subjects. The results also show that the technique was suitable for classifying the 5 vowels into 5 classes when trained for each subject. It is suggested that such a technology may be used to give simple unvoiced commands when trained for a specific user.

  16. Face identity recognition in autism spectrum disorders: a review of behavioral studies.

    PubMed

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2012-03-01

    Face recognition--the ability to recognize a person from their facial appearance--is essential for normal social interaction. Face recognition deficits have been implicated in the most common disorder of social interaction: autism. Here we ask: is face identity recognition in fact impaired in people with autism? Reviewing behavioral studies we find no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism: markers of typical face identity recognition, such as the face inversion effect, seem to be present in people with autism. However, quantitatively--i.e., how well facial identity is remembered or discriminated--people with autism perform worse than typical individuals. This impairment is particularly clear in face memory and in face perception tasks in which a delay intervenes between sample and test, and less so in tasks with no memory demand. Although some evidence suggests that this deficit may be specific to faces, further evidence on this question is necessary. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Facial recognition of happiness among older adults with active and remitted major depression.

    PubMed

    Shiroma, Paulo R; Thuras, Paul; Johns, Brian; Lim, Kelvin O

    2016-09-30

    Biased emotion processing in depression might be a trait characteristic independent of mood improvement and a vulnerability factor for further depressive episodes. This phenomenon has not been adequately examined among older adults with depression. In a 2-year cross-sectional study, 59 older patients with either active or remitted major depression, or never-depressed, completed a facial emotion recognition task (FERT) to probe perceptual bias for happiness. The results showed that depressed patients, compared with never-depressed subjects, had significantly lower sensitivity in identifying happiness, particularly at moderate intensities of facial stimuli. Patients in remission from a previous major depressive episode but with no or minimal symptoms had similar sensitivity rates for identifying happy facial expressions as patients with an active depressive episode. Further studies are necessary to confirm whether recognition of happy expressions reflects a persistent perceptual bias of major depression in older adults. Published by Elsevier Ireland Ltd.

  18. Posteromedial hyperactivation during episodic recognition among people with memory decline: findings from the WRAP study

    PubMed Central

    Nicholas, Christopher R.; Okonkwo, Ozioma C.; Bendlin, Barbara B.; Oh, Jennifer M.; Asthana, Sanjay; Rowley, Howard A.; Hermann, Bruce; Sager, Mark A.

    2014-01-01

    Episodic memory decline is one of the earliest preclinical symptoms of AD, and has been associated with an upregulation of the BOLD response in the prodromal stage (e.g., MCI) of AD. In a previous study, we observed upregulation in cognitively normal (CN) subjects with subclinical episodic memory decline compared to non-decliners. In light of this finding, we sought to determine whether a separate cohort of Decliners would show increased brain activation compared to Stable subjects during episodic memory processing, and whether the BOLD effect was influenced by cerebral blood flow (CBF) or gray matter volume (GMV). Individuals were classified as a “Decliner” if scores on the Rey Auditory Verbal Learning Test (RAVLT) consistently fell ≥1.5 SD below expected intra- or inter-individual levels. fMRI was used to compare activation during a facial recognition memory task in 90 Stable (age=59.1) and 34 Decliner (age=62.1, SD=5.9) CN middle-aged adults and 10 MCI patients (age=72.1, SD=9.4). Arterial spin labeling and anatomical T1 MRI were used to measure resting CBF and GMV, respectively. Stables and Decliners performed similarly on the episodic recognition memory task and significantly better than MCI patients. Compared to Stables, Decliners showed increased BOLD signal in the left precuneus on the episodic memory task that was not explained by CBF or GMV, familial AD risk factors, or neuropsychological measures. These findings suggest that subtle changes in the BOLD signal reflecting altered neural function may be a relatively early phenomenon associated with memory decline. PMID:25332108

  19. Implementation of facial recognition with Microsoft Kinect v2 sensor for patient verification.

    PubMed

    Silverstein, Evan; Snyder, Michael

    2017-06-01

    The aim of this study was to present a straightforward implementation of facial recognition using the Microsoft Kinect v2 sensor for patient identification in a radiotherapy setting. A facial recognition system was created with the Microsoft Kinect v2 using a facial mapping library distributed with the Kinect v2 SDK as a basis for the algorithm. The system extracts 31 fiducial points representing various facial landmarks, which are used in both the creation of a reference data set and subsequent evaluations of real-time sensor data in the matching algorithm. To test the algorithm, a database of 39 faces was created, each with 465 vectors derived from the fiducial points, and a one-to-one matching procedure was performed to obtain sensitivity and specificity data for the facial identification system. ROC curves were plotted to display system performance and identify thresholds for match determination. In addition, system performance as a function of ambient light intensity was tested. Using optimized parameters in the matching algorithm, the sensitivity of the system over 5299 trials was 96.5% and the specificity was 96.7%. The results indicate a fairly robust methodology for verifying, in real time, a specific face through comparison with a precollected reference data set. In its current implementation, the process of data collection for each face and subsequent matching session averaged approximately 30 s, which may be too onerous to provide a realistic supplement to patient identification in a clinical setting. Despite the time commitment, the data collection process was well tolerated by all participants and most robust when consistent ambient light conditions were maintained across both the reference recording session and subsequent real-time identification sessions. A facial recognition system can be implemented for patient identification using the Microsoft Kinect v2 sensor and the distributed SDK. In its present form, the system is accurate, if time-consuming.
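    The 465 vectors per face are consistent with all pairwise combinations of the 31 fiducial points (31·30/2 = 465). A minimal numpy sketch of that idea follows: pairwise-distance features plus a simple threshold matcher. The distance features, matching score, and threshold are our assumptions; the paper's actual matching algorithm is not specified in the abstract.

```python
import numpy as np
from itertools import combinations

def landmark_features(points):
    """All pairwise distances between 31 landmarks: 31*30/2 = 465 values,
    matching the 465-vector count in the abstract (an assumed reading)."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

def match(reference, candidate, threshold=0.05):
    """Declare a match when the mean relative discrepancy between the
    two faces' pairwise-distance features is below a fixed threshold."""
    ref_f, cand_f = landmark_features(reference), landmark_features(candidate)
    score = np.mean(np.abs(ref_f - cand_f) / (ref_f + 1e-9))
    return score <= threshold, score

# Toy 3D landmark sets: an enrolled face, a noisy re-capture of the same
# face, and an unrelated face.
rng = np.random.default_rng(0)
face_a = rng.random((31, 3))
face_a_again = face_a + rng.normal(0, 0.002, (31, 3))
face_b = rng.random((31, 3))

same, s1 = match(face_a, face_a_again)
diff, s2 = match(face_a, face_b)
```

Sweeping `threshold` over many genuine and impostor comparisons yields exactly the ROC curves the study uses to pick an operating point trading sensitivity against specificity.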

  20. Transfer-appropriate processing in recognition memory: perceptual and conceptual effects on recognition memory depend on task demands.

    PubMed

    Parks, Colleen M

    2013-07-01

    Research examining the importance of surface-level information to familiarity in recognition memory tasks is mixed: Sometimes it affects recognition and sometimes it does not. One potential explanation of the inconsistent findings comes from the ideas of dual process theory of recognition and the transfer-appropriate processing framework, which suggest that the extent to which perceptual fluency matters on a recognition test depends in large part on the task demands. A test that recruits perceptual processing for discrimination should show greater perceptual effects and smaller conceptual effects than standard recognition, similar to the pattern of effects found in perceptual implicit memory tasks. This idea was tested in the current experiment by crossing a levels of processing manipulation with a modality manipulation on a series of recognition tests that ranged from conceptual (standard recognition) to very perceptually demanding (a speeded recognition test with degraded stimuli). Results showed that the levels of processing effect decreased and the effect of modality increased when tests were made perceptually demanding. These results support the idea that surface-level features influence performance on recognition tests when they are made salient by the task demands. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  1. Recognition memory probes affect what is remembered in schizophrenia.

    PubMed

    Schwartz, Barbara L; Parker, Elizabeth S; Rosse, Richard B; Deutsch, Stephen I

    2009-05-15

    Cognitive psychology offers tools to localize the memory processes most vulnerable to disruption in schizophrenia and to identify how patients with schizophrenia best remember. In this research, we used the University of Southern California Repeatable Episodic Memory Test (USC-REMT; Parker, E.S., Landau, S.M., Whipple, S.C., Schwartz, B.L., 2004. Aging, recall, and recognition: A study on the sensitivity of the University of Southern California Repeatable Episodic Memory Test (USC-REMT). Journal of Clinical and Experimental Neuropsychology 26(3), 428-440.) to examine how two different recognition memory probes affect memory performance in patients with schizophrenia and matched controls. Patients with schizophrenia studied equivalent word lists and were tested by yes-no recognition and forced-choice recognition following identical encoding and storage conditions. Compared with controls, patients with schizophrenia were particularly impaired when tested by yes-no recognition relative to forced-choice recognition. Patients had greatest deficits on hits in yes-no recognition but did not exhibit elevated false alarms. The data point to the importance of retrieval processes in schizophrenia, and highlight the need for further research on ways to help patients with schizophrenia access what they have learned.

  2. Predicting the Accuracy of Facial Affect Recognition: The Interaction of Child Maltreatment and Intellectual Functioning

    ERIC Educational Resources Information Center

    Shenk, Chad E.; Putnam, Frank W.; Noll, Jennie G.

    2013-01-01

    Previous research demonstrates that both child maltreatment and intellectual performance contribute uniquely to the accurate identification of facial affect by children and adolescents. The purpose of this study was to extend this research by examining whether child maltreatment affects the accuracy of facial recognition differently at varying…

  3. Pupil size changes during recognition memory.

    PubMed

    Otero, Samantha C; Weekes, Brendan S; Hutton, Samuel B

    2011-10-01

    Pupils dilate to a greater extent when participants view old compared to new items during recognition memory tests. We report three experiments investigating the cognitive processes associated with this pupil old/new effect. Using a remember/know procedure, we found that the effect occurred for old items that were both remembered and known at recognition, although it was attenuated for known compared to remembered items. In Experiment 2, the pupil old/new effect was observed when items were presented acoustically, suggesting the effect does not depend on low-level visual processes. The pupil old/new effect was also greater for items encoded under deep compared to shallow orienting instructions, suggesting it may reflect the strength of the underlying memory trace. Finally, the pupil old/new effect was also found when participants falsely recognized items as being old. We propose that pupils respond to a strength-of-memory signal and suggest that pupillometry provides a useful technique for exploring the underlying mechanisms of recognition memory. Copyright © 2011 Society for Psychophysiological Research.

  4. Facial Recognition of Happiness Is Impaired in Musicians with High Music Performance Anxiety.

    PubMed

    Sabino, Alini Daniéli Viana; Camargo, Cristielli M; Chagas, Marcos Hortes N; Osório, Flávia L

    2018-01-01

    Music performance anxiety (MPA) can be defined as a lasting and intense apprehension connected with musical performance in public. Studies suggest that MPA can be regarded as a subtype of social anxiety. Since individuals with social anxiety have deficits in the recognition of facial emotion, we hypothesized that musicians with high levels of MPA would share similar impairments. The aim of this study was to compare parameters of facial emotion recognition (FER) between musicians with high and low MPA. 150 amateur and professional musicians with different musical backgrounds were assessed in respect to their level of MPA and completed a dynamic FER task. The outcomes investigated were accuracy, response time, emotional intensity, and response bias. Musicians with high MPA were less accurate in the recognition of happiness (p = 0.04; d = 0.34), had increased response bias toward fear (p = 0.03), and increased response time to facial emotions as a whole (p = 0.02; d = 0.39). Musicians with high MPA displayed FER deficits that were independent of general anxiety levels and possibly of general cognitive capacity. These deficits may favor the maintenance and exacerbation of experiences of anxiety during public performance, since cues of approval, satisfaction, and encouragement are not adequately recognized.

  5. A voxel-based lesion study on facial emotion recognition after penetrating brain injury

    PubMed Central

    Dal Monte, Olga; Solomon, Jeffrey M.; Schintu, Selene; Knutson, Kristine M.; Strenziok, Maren; Pardini, Matteo; Leopold, Anne; Raymont, Vanessa; Grafman, Jordan

    2013-01-01

    The ability to read emotions in the face of another person is an important social skill that can be impaired in subjects with traumatic brain injury (TBI). To determine the brain regions that modulate facial emotion recognition, we conducted a whole-brain analysis using a well-validated facial emotion recognition task and voxel-based lesion symptom mapping (VLSM) in a large sample of patients with focal penetrating TBIs (pTBIs). Our results revealed that individuals with pTBI performed significantly worse than normal controls in recognizing unpleasant emotions. VLSM mapping results showed that impairment in facial emotion recognition was due to damage in a bilateral fronto-temporo-limbic network, including medial prefrontal cortex (PFC), anterior cingulate cortex, left insula and temporal areas. Besides those common areas, damage to bilateral and anterior regions of the PFC led to impairment in recognizing unpleasant emotions, whereas damage to bilateral posterior PFC and left temporal areas led to impairment in recognizing pleasant emotions. Our findings add empirical evidence that the ability to read pleasant and unpleasant emotions in other people's faces is a complex process involving not only a common network that includes bilateral fronto-temporo-limbic lobes, but also other regions depending on emotional valence. PMID:22496440

  6. The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.

    PubMed

    Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard

    2016-10-01

    The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Transfer-Appropriate Processing in Recognition Memory: Perceptual and Conceptual Effects on Recognition Memory Depend on Task Demands

    ERIC Educational Resources Information Center

    Parks, Colleen M.

    2013-01-01

    Research examining the importance of surface-level information to familiarity in recognition memory tasks is mixed: Sometimes it affects recognition and sometimes it does not. One potential explanation of the inconsistent findings comes from the ideas of dual process theory of recognition and the transfer-appropriate processing framework, which…

  8. Functional connectivity between amygdala and facial regions involved in recognition of facial threat

    PubMed Central

    Harada, Tokiko; Ruffman, Ted; Sadato, Norihiro; Iidaka, Tetsuya

    2013-01-01

    The recognition of threatening faces is important for making social judgments. For example, threatening facial features of defendants could affect the decisions of jurors during a trial. Previous neuroimaging studies using faces of members of the general public have identified a pivotal role of the amygdala in perceiving threat. This functional magnetic resonance imaging study used face photographs of male prisoners who had been convicted of first-degree murder (MUR) as threatening facial stimuli. We compared the subjective ratings of MUR faces with those of control (CON) faces and examined how they were related to brain activation, particularly, the modulation of the functional connectivity between the amygdala and other brain regions. The MUR faces were perceived to be more threatening than the CON faces. The bilateral amygdala was shown to respond to both MUR and CON faces, but subtraction analysis revealed no significant difference between the two. Functional connectivity analysis indicated that the extent of connectivity between the left amygdala and the face-related regions (i.e. the superior temporal sulcus, inferior temporal gyrus and fusiform gyrus) was correlated with the subjective threat rating for the faces. We have demonstrated that the functional connectivity is modulated by vigilance for threatening facial features. PMID:22156740

  9. Psychopathy and facial emotion recognition ability in patients with bipolar affective disorder with or without delinquent behaviors.

    PubMed

    Demirel, Husrev; Yesilbas, Dilek; Ozver, Ismail; Yuksek, Erhan; Sahin, Feyzi; Aliustaoglu, Suheyla; Emul, Murat

    2014-04-01

    It is well known that patients with bipolar disorder are more prone to violence and exhibit more criminal behaviors than the general population. A strong relationship between criminal behavior and an inability to empathize with and perceive other people's feelings and facial expressions increases the risk of delinquent behaviors. In this study, we aimed to investigate deficits in facial emotion recognition ability in euthymic bipolar patients who had committed an offense and to compare them with non-delinquent euthymic patients with bipolar disorder. Fifty-five euthymic patients with delinquent behaviors and 54 non-delinquent euthymic bipolar patients as a control group were included in the study. Ekman's Facial Emotion Recognition Test, a sociodemographic data form, the Hare Psychopathy Checklist, the Hamilton Depression Rating Scale and the Young Mania Rating Scale were applied to both groups. There were no significant differences between the case and control groups in terms of average age, gender, level of education, mean age at disease onset and suicide attempts (p>0.05). The three most commonly committed delinquent behaviors in patients with euthymic bipolar disorder were injury (30.8%), threat or insult (20%) and homicide (12.7%). The most accurately identified facial emotion was "happy" (>99% for both groups), while the most frequently misidentified facial emotion was "fear" (<50% for both groups). The total accuracy rate of recognition of facial emotions was significantly lower in patients with delinquent behaviors than in non-delinquent ones (p<0.05). The accuracy rate of recognizing fear expressions was significantly worse in the case group than in the control group (p<0.05). In addition, it tended to be worse for angry facial expressions in criminal euthymic bipolar patients. The response times for happy, fearful, disgusted and angry expressions were significantly longer in the case group than in the control group (p<0.05). This study is the first

  10. Impact of Social Cognition on Alcohol Dependence Treatment Outcome: Poorer Facial Emotion Recognition Predicts Relapse/Dropout.

    PubMed

    Rupp, Claudia I; Derntl, Birgit; Osthaus, Friederike; Kemmler, Georg; Fleischhacker, W Wolfgang

    2017-12-01

    Despite growing evidence for neurobehavioral deficits in social cognition in alcohol use disorder (AUD), the clinical relevance remains unclear, and little is known about its impact on treatment outcome. This study prospectively investigated the impact of neurocognitive social abilities at treatment onset on treatment completion. Fifty-nine alcohol-dependent patients were assessed with measures of social cognition, including 3 core components of empathy, via paradigms measuring: (i) emotion recognition (the ability to recognize emotions via facial expression), (ii) emotional perspective taking, and (iii) affective responsiveness at the beginning of inpatient treatment for alcohol dependence. Subjective measures were also obtained, including estimates of task performance and a self-report measure of empathic abilities (Interpersonal Reactivity Index). According to treatment outcomes, patients were divided into a group with a regular treatment course (e.g., with planned discharge and without relapse during treatment) or an irregular treatment course (e.g., relapse and/or premature and unplanned termination of treatment, "dropout"). Compared with patients completing treatment in a regular fashion, patients with relapse and/or dropout had significantly poorer facial emotion recognition ability at treatment onset. Additional logistic regression analyses confirmed these results and identified poor emotion recognition performance as a significant predictor of relapse/dropout. Self-report (subjective) measures did not correspond with the neurobehavioral social cognition measures or with objective task performance. Analyses of individual subtypes of facial emotions revealed poorer recognition particularly of disgust, anger, and no (neutral faces) emotion in patients with relapse/dropout. Social cognition in AUD is clinically relevant. Less successful treatment outcome was associated with poorer facial emotion recognition ability at the beginning of

  11. Decreased acetylcholine release delays the consolidation of object recognition memory.

    PubMed

    De Jaeger, Xavier; Cammarota, Martín; Prado, Marco A M; Izquierdo, Iván; Prado, Vania F; Pereira, Grace S

    2013-02-01

    Acetylcholine (ACh) is important for different cognitive functions such as learning, memory and attention. The release of ACh depends on its vesicular loading by the vesicular acetylcholine transporter (VAChT). It has been demonstrated that VAChT expression can modulate object recognition memory. However, the role of VAChT expression in object recognition memory persistence remains to be understood. To address this question we used distinct mouse lines with reduced expression of VAChT, as well as pharmacological manipulations of the cholinergic system. We showed that reduction of cholinergic tone impairs object recognition memory measured at 24 h. Surprisingly, object recognition memory measured 4 days after training was impaired by substantial, but not moderate, reductions in VAChT expression. Our results suggest that levels of acetylcholine release strongly modulate object recognition memory consolidation and appear to be of particular importance for memory persistence 4 days after training. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Whole-face procedures for recovering facial images from memory.

    PubMed

    Frowd, Charlie D; Skelton, Faye; Hepton, Gemma; Holden, Laura; Minahil, Simra; Pitchford, Melanie; McIntyre, Alex; Brown, Charity; Hancock, Peter J B

    2013-06-01

    Research has indicated that traditional methods for accessing facial memories usually yield unidentifiable images. Recent research, however, has made important improvements in this area to the witness interview, the method used for constructing the face, and the recognition of finished composites. Here, we investigated whether three of these improvements would produce even more recognisable images when used in conjunction with each other. The techniques are holistic in nature: they involve processes which operate on an entire face. Forty participants first inspected an unfamiliar target face. Nominally 24 h later, they were interviewed using a standard type of cognitive interview (CI) to recall the appearance of the target, or an enhanced 'holistic' interview in which the CI was followed by procedures for focussing on the target's character. Participants then constructed a composite using EvoFIT, a recognition-type system that requires repeatedly selecting items from face arrays, with 'breeding', to 'evolve' a composite. They either saw faces in these arrays with blurred external features, or an enhanced method in which these faces were presented with masked external features. Then, further participants attempted to name the composites, first by looking at the face front-on, the normal method, and then a second time by looking at the face side-on, which research demonstrates facilitates recognition. All techniques improved correct naming on their own, but together promoted highly recognisable composites with mean naming at 74% correct. The implication is that these techniques, if used together by practitioners, should substantially increase the detection of suspects using this forensic method of person identification. Copyright © 2013 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  13. Influence of gender in the recognition of basic facial expressions: A critical literature review

    PubMed Central

    Forni-Santos, Larissa; Osório, Flávia L

    2015-01-01

    AIM: To conduct a systematic literature review about the influence of gender on the recognition of facial expressions of six basic emotions. METHODS: We made a systematic search with the search terms (face OR facial) AND (processing OR recognition OR perception) AND (emotional OR emotion) AND (gender or sex) in PubMed, PsycINFO, LILACS, and SciELO electronic databases for articles assessing outcomes related to response accuracy and latency and emotional intensity. The selection of articles was performed according to parameters set by COCHRANE. The reference lists of the articles found through the database search were checked for additional references of interest. RESULTS: In respect to accuracy, women tend to perform better than men when all emotions are considered as a set. Regarding specific emotions, there seems to be no gender-related differences in the recognition of happiness, whereas results are quite heterogeneous in respect to the remaining emotions, especially sadness, anger, and disgust. Fewer articles dealt with the parameters of response latency and emotional intensity, which hinders the generalization of their findings, especially in the face of their methodological differences. CONCLUSION: The analysis of the studies conducted to date does not allow for definite conclusions concerning the role of the observer’s gender in the recognition of facial emotion, mostly because of the absence of standardized methods of investigation. PMID:26425447

  14. Cognitive deficits after aneurysmal and angiographically negative subarachnoid hemorrhage: Memory, attention, executive functioning, and emotion recognition.

    PubMed

    Buunk, Anne M; Groen, Rob J M; Veenstra, Wencke S; Metzemaekers, Jan D M; van der Hoeven, Johannes H; van Dijk, J Marc C; Spikman, Jacoba M

    2016-11-01

    The authors' aim was to investigate cognitive outcome in patients with aneurysmal and angiographically negative subarachnoid hemorrhage (aSAH and anSAH), by comparing them to healthy controls and to each other. Besides investigating cognitive functions such as memory and attention, they focused on higher-order prefrontal functions, namely executive functioning (EF) and emotion recognition. Patients and healthy controls were assessed with tests measuring memory (15 Words Test, Digit Span), attention and processing speed (Trail Making Test A and B), EF (Zoo Map, Letter Fluency, Dysexecutive Questionnaire), and emotion recognition (Facial Expressions of Emotion Stimuli and Tests). Between-groups comparisons of test performances were made. Patients with aSAH scored significantly lower than healthy controls on measures of memory, processing speed, and attention, but anSAH patients did not. In the higher-order prefrontal functions (EF and emotion recognition), aSAH patients were clearly impaired when compared to healthy controls. However, anSAH patients did not perform significantly better than aSAH patients on the majority of the tests. In the subacute phase after SAH, cognitive functions, including the higher-order prefrontal functions EF and emotion recognition, were clearly impaired in aSAH patients. Patients with anSAH did not perform better than aSAH patients, which indicates that these functions may also be affected to some extent in anSAH patients. Considering the importance of these higher-order prefrontal functions for daily life functioning, and following the results of the present study, tests that measure emotion recognition and EF should be part of the standard neuropsychological assessment after SAH. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Comparing Facial Emotional Recognition in Patients with Borderline Personality Disorder and Patients with Schizotypal Personality Disorder with a Normal Group

    PubMed Central

    Farsham, Aida; Abbaslou, Tahereh; Bidaki, Reza; Bozorg, Bonnie

    2017-01-01

    Objective: No research has been conducted on facial emotional recognition in patients with borderline personality disorder (BPD) and schizotypal personality disorder (SPD). The present study aimed at comparing facial emotion recognition in these patients with the general population. The neurocognitive processing of emotions can reveal the pathologic style of these 2 disorders. Method: Twenty BPD patients, 16 SPD patients, and 20 healthy individuals were selected by the available sampling method. The Structured Clinical Interview for Axis II, Millon Personality Inventory, Beck Depression Inventory and Facial Emotional Recognition Test were conducted for all participants. Discussion: The results of one-way ANOVA and Scheffe’s post hoc test analysis revealed significant differences in the neuropsychological assessment of facial emotional recognition between BPD and SPD patients and the normal group (p = 0.001). A significant difference was found in recognition of the emotion of fear between the BPD group and the normal population (p = 0.008). A significant difference was observed between SPD patients and the control group in recognition of the emotion of wonder (p = 0.04). The obtained results indicated a deficit in negative emotion recognition, especially of the emotion of disgust; thus, it can be concluded that these patients have the same neurocognitive profile in the emotion domain. PMID:28659980

  16. [Emotion Recognition in Patients with Peripheral Facial Paralysis - A Pilot Study].

    PubMed

    Konnerth, V; Mohr, G; von Piekartz, H

    2016-02-01

    The perception of emotions is an important component enabling human beings to engage in social interaction in everyday life. The ability to recognize emotions in another person's facial expression is thus a key prerequisite. The following study aimed at evaluating the ability of subjects with peripheral facial paresis to perceive emotions, compared with healthy individuals. A pilot study was conducted in which 13 people with peripheral facial paresis participated. The assessment included the Facially Expressed Emotion Labeling Test (FEEL-Test), the Facial Laterality Recognition Test (FLR-Test) and the Toronto Alexithymia Scale 26 (TAS-26). The results were compared with data on healthy people from other studies. In contrast to healthy individuals, the subjects with facial paresis showed more difficulty recognizing basic emotions; however, the results were not significant. The participants were significantly slower (right/left: p<0.001) in perceiving facial laterality compared to healthy people. With regard to alexithymia, the tested group showed significantly higher scores (p<0.001) than unimpaired people. The present pilot study does not demonstrate any impairment of this specific patient group's ability to recognize emotions and facial laterality. For future studies the research question should be verified in a larger sample size. © Georg Thieme Verlag KG Stuttgart · New York.

  17. The Anatomy of Non-conscious Recognition Memory.

    PubMed

    Rosenthal, Clive R; Soto, David

    2016-11-01

    Cortical regions as early as primary visual cortex have been implicated in recognition memory. Here, we outline the challenges that this presents for neurobiological accounts of recognition memory. We conclude that understanding the role of early visual cortex (EVC) in this process will require the use of protocols that mask stimuli from visual awareness. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Human facial neural activities and gesture recognition for machine-interfacing applications.

    PubMed

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures, with a different number of gestures in each group, are made from the existing facial gestures. Finally, all combinations are trained and classified by a fuzzy c-means classifier. In conclusion, the combinations with the highest recognition accuracy in each group are chosen. An average accuracy >90% for the chosen combinations demonstrates their suitability for use as command controllers.
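    The pipeline described above (band-pass filtering, windowed root-mean-square features, fuzzy c-means classification) can be sketched in Python. This is an illustrative reconstruction, not the authors' implementation: the cutoff frequencies, window length, and fuzziness exponent below are assumed values.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(sig, fs, lo=20.0, hi=450.0, order=4):
        """Band-pass filter a raw EMG signal (typical surface-EMG band assumed)."""
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, sig)

    def rms_features(sig, win):
        """Root-mean-square amplitude over non-overlapping windows."""
        n = len(sig) // win
        return np.sqrt((sig[: n * win].reshape(n, win) ** 2).mean(axis=1))

    def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
        """Minimal fuzzy c-means: returns (centroids, membership matrix U)."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
        for _ in range(iters):
            W = U ** m
            centroids = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
            U = 1.0 / (d ** (2.0 / (m - 1.0)))     # standard FCM membership update
            U /= U.sum(axis=1, keepdims=True)
        return centroids, U
    ```

    With well-separated RMS feature vectors, the membership matrix U assigns each sample almost entirely to one gesture cluster, and an argmax over U yields hard gesture labels.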

  19. Remember-Know and Source Memory Instructions Can Qualitatively Change Old-New Recognition Accuracy: The Modality-Match Effect in Recognition Memory

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Besken, Miri; Peterson, Daniel

    2010-01-01

    Remember-Know (RK) and source memory tasks were designed to elucidate processes underlying memory retrieval. As part of more complex judgments, both tests produce a measure of old-new recognition, which is typically treated as equivalent to that derived from a standard recognition task. The present study demonstrates, however, that recognition…

  20. Effects of pre-experimental knowledge on recognition memory.

    PubMed

    Bird, Chris M; Davies, Rachel A; Ward, Jamie; Burgess, Neil

    2011-01-01

    The influence of pre-experimental autobiographical knowledge on recognition memory was investigated using as memoranda faces that were either personally known or unknown to the participant. Under a dual process theory, such knowledge boosted both recollection- and familiarity-based recognition judgements. Under an unequal variance signal detection model, pre-experimental knowledge increased both the variance and the separation of the target and foil memory strength distributions, boosting hits and correct rejections. Thus, pre-experimental knowledge has profound effects on the multiple, interacting processes that subserve recognition memory, and likely on the neural systems that underpin them.
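    The unequal-variance signal detection account invoked above can be made concrete with a minimal sketch: foil memory strengths are modeled as N(0, 1), target strengths as N(mu_t, sigma_t) with sigma_t > 1, and an item is called "old" when its strength exceeds a decision criterion. The parameter values below are hypothetical, not estimates from the study.

    ```python
    from math import erf, sqrt

    def phi(x):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def uvsd_rates(mu_t, sigma_t, criterion):
        """Hit and false-alarm rates under the unequal-variance signal
        detection model: foils ~ N(0, 1), targets ~ N(mu_t, sigma_t)."""
        hit = 1.0 - phi((criterion - mu_t) / sigma_t)  # P(strength > c | target)
        fa = 1.0 - phi(criterion)                      # P(strength > c | foil)
        return hit, fa
    ```

    Raising mu_t (greater separation of the target distribution) increases hits while a fixed criterion keeps false alarms low, which is how added pre-experimental knowledge can boost both hits and correct rejections in this framework.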

  1. Achievement motivation and memory: achievement goals differentially influence immediate and delayed remember-know recognition memory.

    PubMed

    Murayama, Kou; Elliot, Andrew J

    2011-10-01

    Little research has been conducted on achievement motivation and memory and, more specifically, on achievement goals and memory. In the present research, the authors conducted two experiments designed to examine the influence of mastery-approach and performance-approach goals on immediate and delayed remember-know recognition memory. The experiments revealed differential effects for achievement goals over time: Performance-approach goals showed higher correct remember responding on an immediate recognition test, whereas mastery-approach goals showed higher correct remember responding on a delayed recognition test. Achievement goals had no influence on overall recognition memory and no consistent influence on know responding across experiments. These findings indicate that it is important to consider quality, not just quantity, in both motivation and memory, when studying relations between these constructs.

  2. Perceived Parenting Mediates Serotonin Transporter Gene (5-HTTLPR) and Neural System Function during Facial Recognition: A Pilot Study.

    PubMed

    Nishikawa, Saori; Toshima, Tamotsu; Kobayashi, Masao

    2015-01-01

    This study examined changes in prefrontal oxy-Hb levels measured by NIRS (Near-Infrared Spectroscopy) during a facial-emotion recognition task in healthy adults, testing a mediational/moderational model of these variables. Fifty-three healthy adults (male = 35, female = 18) aged 22 to 37 years (mean age = 24.05 years) provided saliva samples, completed an EMBU questionnaire (Swedish acronym for Egna Minnen Beträffande Uppfostran [My memories of upbringing]), and participated in a facial-emotion recognition task during NIRS recording. There was a main effect of maternal rejection on RoxH (right frontal activation during an ambiguous task), and a gene × environment (G × E) interaction on RoxH, suggesting that individuals who carry the SL or LL genotype and who endorse greater perceived maternal rejection show less right frontal activation than SL/LL carriers with lower perceived maternal rejection. Finally, perceived parenting style played a mediating role in right frontal activation via the 5-HTTLPR genotype. Early-perceived parenting might influence neural activity in an uncertain situation, i.e., rating ambiguous faces, among individuals with certain genotypes. This preliminary study makes a small contribution to the mapping of the influence of gene and behaviour on the neural system. More such attempts should be made in order to clarify the links.

  4. Emotional memory: No source memory without old-new recognition.

    PubMed

    Bell, Raoul; Mieth, Laura; Buchner, Axel

    2017-02-01

    Findings reported in the memory literature suggest that the emotional components of an encoding episode can be dissociated from nonemotional memory. In particular, it has been found that the previous association with threatening events can be retrieved in aversive conditioning even in the absence of item identification. In the present study, we test whether emotional source memory can be independent of item recognition. Participants saw pictures of snakes paired with threatening and nonthreatening context information (poisonousness or nonpoisonousness). In the source memory test, participants were required to remember whether a snake was associated with poisonousness or nonpoisonousness. A simple extension of a well-established multinomial source monitoring model was used to measure source memory for unrecognized items. By using this model, it was possible to assess directly whether participants were able to associate a previously seen snake with poisonousness or nonpoisonousness even if the snake itself was not recognized as having been presented during the experiment. In 3 experiments, emotional source memory was only found for recognized items. While source memory for recognized items differed between emotional and nonemotional information, source memory for unrecognized items was equally absent for emotional and nonemotional information. We conclude that emotional context information is bound to item representations and cannot be retrieved in the absence of item recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
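    The multinomial source-monitoring approach that the study extends can be illustrated with a basic two-high-threshold processing tree for an item studied in source A. The parameter names follow common MPT conventions (D = item detection, d = source retrieval, b = probability of guessing "old", g = probability of guessing source A); this is a simplified stand-in for the authors' extended model, not their actual specification.

    ```python
    def source_a_predictions(D, d, b, g):
        """Predicted response probabilities for a source-A item under a basic
        two-high-threshold source-monitoring tree."""
        # Detected and source retrieved, or detected/undetected with guessing:
        p_a = D * d + D * (1 - d) * g + (1 - D) * b * g
        p_b = D * (1 - d) * (1 - g) + (1 - D) * b * (1 - g)
        p_new = (1 - D) * (1 - b)
        return {"A": p_a, "B": p_b, "new": p_new}
    ```

    When D = 0 (the item is not detected), the "A" and "B" response probabilities collapse to pure guessing (b·g versus b·(1−g)); the authors' extension, as described, adds a way to measure genuine source memory within exactly this unrecognized branch.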

  5. Quality of life differences in patients with right- versus left-sided facial paralysis: Universal preference of right-sided human face recognition.

    PubMed

    Ryu, Nam Gyu; Lim, Byung Woo; Cho, Jae Keun; Kim, Jin

    2016-09-01

    We investigated in a preliminary study whether experiencing right- or left-sided facial paralysis would affect an individual's ability to recognize one side of the human face, using hybrid hemi-facial photos. Further investigation looked at the relationship between facial recognition ability, stress, and quality of life. To investigate the predominance of one side of the human face in face recognition, 100 normal participants (right-handed: n = 97, left-handed: n = 3, right brain dominance: n = 56, left brain dominance: n = 44) answered a questionnaire that included hybrid hemi-facial photos developed to determine the superiority of one side for human face recognition. To determine differences in stress level and quality of life between individuals experiencing right- and left-sided facial paralysis, 100 patients (right side: 50, left side: 50, not including traumatic facial nerve paralysis) answered a questionnaire comprising the facial disability index test and a quality of life measure (SF-36, Korean version). Regardless of handedness or hemispheric dominance, the proportion favoring the right side in human face recognition was larger than that favoring the left side (71% versus 12%; neutral: 17%). The facial distress index of patients with right-sided facial paralysis was lower than that of left-sided patients (68.8 ± 9.42 versus 76.4 ± 8.28), and the SF-36 scores of right-sided patients were lower than those of left-sided patients (119.07 ± 15.24 versus 123.25 ± 16.48; total score: 166). Given the universal preference for the right side in human face recognition, patients with right-sided facial paralysis showed worse psychological mood and social interaction than those with left-sided paralysis. This information is helpful to clinicians in that psychological and social factors should be considered when treating patients with facial paralysis. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  6. Recognition memory is modulated by visual similarity.

    PubMed

    Yago, Elena; Ishai, Alumit

    2006-06-01

    We used event-related fMRI to test whether recognition memory depends on visual similarity between familiar prototypes and novel exemplars. Subjects memorized portraits, landscapes, and abstract compositions by six painters with a unique style, and later performed a memory recognition task. The prototypes were presented with new exemplars that were either visually similar or dissimilar. Behaviorally, novel, dissimilar items were detected faster and more accurately. We found activation in a distributed cortical network that included face- and object-selective regions in the visual cortex, where familiar prototypes evoked stronger responses than new exemplars; attention-related regions in parietal cortex, where responses elicited by new exemplars were reduced with decreased similarity to the prototypes; and the hippocampus and memory-related regions in parietal and prefrontal cortices, where stronger responses were evoked by the dissimilar exemplars. Our findings suggest that recognition memory is mediated by classification of novel exemplars as a match or a mismatch, based on their visual similarity to familiar prototypes.

  7. Pupil dilation during recognition memory: Isolating unexpected recognition from judgment uncertainty.

    PubMed

    Mill, Ravi D; O'Connor, Akira R; Dobbins, Ian G

    2016-09-01

    Optimally discriminating familiar from novel stimuli demands a decision-making process informed by prior expectations. Here we demonstrate that pupillary dilation (PD) responses during recognition memory decisions are modulated by expectations, and more specifically, that pupil dilation increases for unexpected compared to expected recognition. Furthermore, multi-level modeling demonstrated that the time course of the dilation during each individual trial contains separable early and late dilation components, with the early amplitude capturing unexpected recognition, and the later trailing slope reflecting general judgment uncertainty or effort. This is the first demonstration that the early dilation response during recognition is dependent upon observer expectations and that separate recognition expectation and judgment uncertainty components are present in the dilation time course of every trial. The findings provide novel insights into adaptive memory-linked orienting mechanisms as well as the general cognitive underpinnings of the pupillary index of autonomic nervous system activity. Copyright © 2016 Elsevier B.V. All rights reserved.
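    The two-component decomposition described above can be illustrated with a minimal sketch: an early-window peak amplitude and a late-window slope extracted from a single-trial pupil trace. The window boundaries and the synthetic trace below are assumptions for illustration, not the authors' analysis parameters.

    ```python
    import numpy as np

    def pupil_components(trace, t, early=(0.5, 1.5), late=(1.5, 3.0)):
        """Split a single-trial pupil trace into an early amplitude and a late slope.

        trace : pupil diameter samples; t : time stamps in seconds.
        Window boundaries are illustrative assumptions, not published values.
        """
        early_mask = (t >= early[0]) & (t < early[1])
        late_mask = (t >= late[0]) & (t <= late[1])
        early_amplitude = trace[early_mask].max()                      # peak of the early dilation
        late_slope = np.polyfit(t[late_mask], trace[late_mask], 1)[0]  # trailing slope
        return early_amplitude, late_slope

    # Synthetic trial: a transient early peak followed by a slow decay
    t = np.linspace(0, 3, 300)
    trace = 0.4 * np.exp(-((t - 1.0) ** 2) / 0.05) - 0.05 * np.clip(t - 1.5, 0, None)
    amp, slope = pupil_components(trace, t)
    ```

    In a multi-level model such as the one the authors describe, per-trial values like `amp` and `slope` would then enter as separate predictors of the behavioral condition.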

  8. Affective theory of mind inferences contextually influence the recognition of emotional facial expressions.

    PubMed

    Stewart, Suzanne L K; Schepman, Astrid; Haigh, Matthew; McHugh, Rhian; Stewart, Andrew J

    2018-03-14

    The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently-valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically and contextually influence the perceptual processing of emotional facial expressions in a separate task, even when those emotions are of a distinctive valence. Thus, our novel findings suggest that language acts as a contextual influence on the recognition of emotional facial expressions for both same and different valences.

  9. Effects of levodopa-carbidopa-entacapone and smoked cocaine on facial affect recognition in cocaine smokers.

    PubMed

    Bedi, Gillinder; Shiffrin, Laura; Vadhan, Nehal P; Nunes, Edward V; Foltin, Richard W; Bisaga, Adam

    2016-04-01

    In addition to difficulties in daily social functioning, regular cocaine users have decrements in social processing (the cognitive and affective processes underlying social behavior) relative to non-users. Little is known, however, about the effects of clinically-relevant pharmacological agents, such as cocaine and potential treatment medications, on social processing in cocaine users. Such drug effects could potentially alleviate or compound baseline social processing decrements in cocaine abusers. Here, we assessed the individual and combined effects of smoked cocaine and a potential treatment medication, levodopa-carbidopa-entacapone (LCE), on facial emotion recognition in cocaine smokers. Healthy non-treatment-seeking cocaine smokers (N = 14; two female) completed this 11-day inpatient within-subjects study. Participants received LCE (titrated to 400mg/100mg/200mg b.i.d.) for five days with the remaining time on placebo. The order of medication administration was counterbalanced. Facial emotion recognition was measured twice during target LCE dosing and twice on placebo: once without cocaine and once after repeated cocaine doses. LCE increased the response threshold for identification of facial fear, biasing responses away from fear identification. Cocaine had no effect on facial emotion recognition. Results highlight the possibility for candidate pharmacotherapies to have unintended impacts on social processing in cocaine users, potentially exacerbating already existing difficulties in this population. © The Author(s) 2016.
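    A shift in response threshold of the kind reported above is commonly quantified with the signal detection criterion c, where higher values indicate a more conservative bias (fewer "fear" responses overall). A minimal sketch using only the standard library; the hit and false-alarm rates are invented for illustration, not data from this study:

    ```python
    from statistics import NormalDist

    def sdt_criterion(hit_rate, fa_rate):
        """Signal detection criterion c = -(z(H) + z(FA)) / 2.

        Higher c means a more conservative threshold: the observer needs
        stronger evidence before endorsing the target category.
        """
        z = NormalDist().inv_cdf  # probit transform
        return -(z(hit_rate) + z(fa_rate)) / 2

    # Hypothetical rates for identifying fear on placebo vs. on medication
    c_placebo = sdt_criterion(hit_rate=0.80, fa_rate=0.20)  # symmetric rates: c = 0 (unbiased)
    c_drug = sdt_criterion(hit_rate=0.60, fa_rate=0.10)

    # A raised criterion biases responses away from "fear" identification
    assert c_drug > c_placebo
    ```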

  10. Electrophysiological distinctions between recognition memory with and without awareness

    PubMed Central

    Ko, Philip C.; Duda, Bryant; Hussey, Erin P.; Ally, Brandon A.

    2013-01-01

    The influence of implicit memory representations on explicit recognition may help to explain cases of accurate recognition decisions made with high uncertainty. During a recognition task, implicit memory may enhance the fluency of a test item, biasing decision processes to endorse it as “old”. This model may help explain recognition-without-identification, a remarkable phenomenon in which participants make highly accurate recognition decisions despite the inability to identify the test item. The current study investigated whether recognition-without-identification for pictures elicits a similar pattern of neural activity as other types of accurate recognition decisions made with uncertainty. Further, this study also examined whether recognition-without-identification for pictures could be attained by the use of perceptual and conceptual information from memory. To accomplish this, participants studied pictures and then performed a recognition task under difficult viewing conditions while event-related potentials (ERPs) were recorded. Behavioral results showed that recognition was highly accurate even when test items could not be identified, demonstrating recognition-without-identification. The behavioral performance also indicated that recognition-without-identification was mediated by both perceptual and conceptual information, independently of one another. The ERP results showed dramatically different memory-related activity during the early 300 to 500 ms epoch for identified items that were studied compared to unidentified items that were studied. Similar to previous work highlighting accurate recognition without retrieval awareness, test items that were not identified, but correctly endorsed as “old,” elicited a negative posterior old/new effect (i.e., N300). In contrast, test items that were identified and correctly endorsed as “old,” elicited the classic positive frontal old/new effect (i.e., FN400). Importantly, both of these effects were

  11. Posteromedial hyperactivation during episodic recognition among people with memory decline: findings from the WRAP study.

    PubMed

    Nicholas, Christopher R; Okonkwo, Ozioma C; Bendlin, Barbara B; Oh, Jennifer M; Asthana, Sanjay; Rowley, Howard A; Hermann, Bruce; Sager, Mark A; Johnson, Sterling C

    2015-12-01

    Episodic memory decline is one of the earliest preclinical symptoms of AD, and has been associated with an upregulation in the BOLD response in the prodromal stage (e.g. MCI) of AD. In a previous study, we observed upregulation in cognitively normal (CN) subjects with subclinical episodic memory decline compared to non-decliners. In light of this finding, we sought to determine whether a separate cohort of Decliners would show increased brain activation compared to Stable subjects during episodic memory processing, and whether the BOLD effect was influenced by cerebral blood flow (CBF) or gray matter volume (GMV). Individuals were classified as a "Decliner" if scores on the Rey Auditory Verbal Learning Test (RAVLT) consistently fell ≥ 1.5 SD below expected intra- or inter-individual levels. fMRI was used to compare activation during a facial recognition memory task in 90 Stable (age = 59.1) and 34 Decliner (age = 62.1, SD = 5.9) CN middle-aged adults and 10 MCI patients (age = 72.1, SD = 9.4). Arterial spin labeling and anatomical T1 MRI were used to measure resting CBF and GMV, respectively. Stables and Decliners performed similarly on the episodic recognition memory task and significantly better than MCI patients. Compared to Stables, Decliners showed increased BOLD signal in the left precuneus on the episodic memory task that was not explained by CBF or GMV, familial AD risk factors, or neuropsychological measures. These findings suggest that subtle changes in the BOLD signal reflecting altered neural function may be a relatively early phenomenon associated with memory decline.

  12. Facial emotion recognition, socio-occupational functioning and expressed emotions in schizophrenia versus bipolar disorder.

    PubMed

    Thonse, Umesh; Behere, Rishikesh V; Praharaj, Samir Kumar; Sharma, Podila Sathya Venkata Narasimha

    2018-06-01

    Facial emotion recognition deficits have been consistently demonstrated in patients with severe mental disorders. Expressed emotion is found to be an important predictor of relapse. However, the relationship between facial emotion recognition abilities and expressed emotions, and its influence on socio-occupational functioning, has not been studied in schizophrenia versus bipolar disorder. In this study we examined 91 patients with schizophrenia and 71 with bipolar disorder for psychopathology, socio-occupational functioning and emotion recognition abilities. Primary caregivers of 62 patients with schizophrenia and 49 with bipolar disorder were assessed on the Family Attitude Questionnaire to assess their expressed emotions. Patients with schizophrenia and bipolar disorder performed similarly on the emotion recognition task. Patients in the schizophrenia group experienced more critical comments and had poorer socio-occupational functioning as compared to patients with bipolar disorder. Poorer socio-occupational functioning in patients with schizophrenia was significantly associated with greater dissatisfaction in their caregivers. In patients with bipolar disorder, poorer emotion recognition scores significantly correlated with poorer adaptive living skills and greater hostility and dissatisfaction in their caregivers. The findings of our study suggest that emotion recognition abilities in patients with bipolar disorder are associated with negative expressed emotions leading to problems in adaptive living skills. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

    Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  14. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  15. Empathy, but not mimicry restriction, influences the recognition of change in emotional facial expressions.

    PubMed

    Kosonogov, Vladimir; Titova, Alisa; Vorobyeva, Elena

    2015-01-01

    The current study addressed the hypothesis that empathy and the restriction of observers' facial muscles can influence recognition of emotional facial expressions. A sample of 74 participants recognized the subjective onset of emotional facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral) in a series of morphed face photographs showing a gradual change (frame by frame) from one expression to another. The high-empathy participants (as measured by the Empathy Quotient) recognized emotional facial expressions at earlier photographs in the series than did low-empathy ones, but there was no difference in exploration time. Restriction of the observers' facial muscles (with plasters and a stick in the mouth) did not influence the responses. We discuss these findings in the context of the embodied simulation theory and previous data on empathy.

  16. A facial expression of pax: Assessing children's "recognition" of emotion from faces.

    PubMed

    Nelson, Nicole L; Russell, James A

    2016-01-01

    In a classic study, children were shown an array of facial expressions and asked to choose the person who expressed a specific emotion. Children were later asked to name the emotion in the face with any label they wanted. Subsequent research often relied on the same two tasks--choice from array and free labeling--to support the conclusion that children recognize basic emotions from facial expressions. Here five studies (N=120, 2- to 10-year-olds) showed that these two tasks can produce illusory recognition when a novel nonsense facial expression is included in the array. Children "recognized" a nonsense emotion (pax or tolen) and two familiar emotions (fear and jealousy) from the same nonsense face. Children likely used a process of elimination; they paired the unknown facial expression with a label given in the choice-from-array task and, after just two trials, freely labeled the new facial expression with the new label. These data indicate that past studies using this method may have overestimated children's expression knowledge. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Relaxing decision criteria does not improve recognition memory in amnesic patients.

    PubMed

    Reber, P J; Squire, L R

    1999-05-01

    An important question about the organization of memory is whether information available in non-declarative memory can contribute to performance on tasks of declarative memory. Dorfman, Kihlstrom, Cork, and Misiaszek (1995) described a circumstance in which the phenomenon of priming might benefit recognition memory performance. They reported that patients receiving electroconvulsive therapy improved their recognition performance when they were encouraged to relax their criteria for endorsing test items as familiar. It was suggested that priming improved recognition by making information available about the familiarity of test items. In three experiments, we sought unsuccessfully to reproduce this phenomenon in amnesic patients. In Experiment 3, we reproduced the methods and procedure used by Dorfman et al. but still found no evidence for improved recognition memory following the manipulation of decision criteria. Although negative findings have their own limitations, our findings suggest that the phenomenon reported by Dorfman et al. does not generalize well. Our results agree with several recent findings that suggest that priming is independent of recognition memory and does not contribute to recognition memory scores.

  18. Distinct roles of basal forebrain cholinergic neurons in spatial and object recognition memory.

    PubMed

    Okada, Kana; Nishizawa, Kayo; Kobayashi, Tomoko; Sakata, Shogo; Kobayashi, Kazuto

    2015-08-06

    Recognition memory requires processing of various types of information such as objects and locations. Impairment in recognition memory is a prominent feature of amnesia and a symptom of Alzheimer's disease (AD). Basal forebrain cholinergic neurons contain two major groups, one localized in the medial septum (MS)/vertical diagonal band of Broca (vDB), and the other in the nucleus basalis magnocellularis (NBM). The roles of these cell groups in recognition memory have been debated, and it remains unclear how they contribute to it. We use a genetic cell targeting technique to selectively eliminate cholinergic cell groups and then test spatial and object recognition memory through different behavioural tasks. Eliminating MS/vDB neurons impairs spatial but not object recognition memory in the reference and working memory tasks, whereas NBM elimination undermines only object recognition memory in the working memory task. These impairments are restored by treatment with acetylcholinesterase inhibitors, anti-dementia drugs for AD. Our results highlight that MS/vDB and NBM cholinergic neurons are not only implicated in recognition memory but also have essential roles in different types of recognition memory.

  19. Joint recognition-expression impairment of facial emotions in Huntington's disease despite intact understanding of feelings.

    PubMed

    Trinkler, Iris; Cleret de Langavant, Laurent; Bachoud-Lévi, Anne-Catherine

    2013-02-01

    Patients with Huntington's disease (HD), a neurodegenerative disorder that causes major motor impairments, also show cognitive and emotional deficits. While their deficit in recognising emotions has been explored in depth, little is known about their ability to express emotions and understand their feelings. If these faculties were impaired, patients might not only mis-read emotion expressions in others but their own emotions might be mis-interpreted by others as well, or thirdly, they might have difficulties understanding and describing their feelings. We compared the performance of recognition and expression of facial emotions in 13 HD patients with mild motor impairments but without significant bucco-facial abnormalities, and 13 controls matched for age and education. Emotion recognition was investigated in a forced-choice recognition test (FCR), and emotion expression by filming participants while they mimed the six basic emotional facial expressions (anger, disgust, fear, surprise, sadness and joy) to the experimenter. The films were then segmented into 60 stimuli per participant and four external raters performed a FCR on this material. Further, we tested understanding of feelings in self (alexithymia) and others (empathy) using questionnaires. Both recognition and expression were impaired across different emotions in HD compared to controls and recognition and expression scores were correlated. By contrast, alexithymia and empathy scores were very similar in HD and controls. This might suggest that emotion deficits in HD might be tied to the expression itself. Because similar emotion recognition-expression deficits are also found in Parkinson's Disease and vascular lesions of the striatum, our results further confirm the importance of the striatum for emotion recognition and expression, while access to the meaning of feelings relies on a different brain network, and is spared in HD. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Recognition Memory, Self-Other Source Memory, and Theory-of-Mind in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Lind, Sophie E.; Bowler, Dermot M.

    2009-01-01

    This study investigated semantic and episodic memory in autism spectrum disorder (ASD), using a task which assessed recognition and self-other source memory. Children with ASD showed undiminished recognition memory but significantly diminished source memory, relative to age- and verbal ability-matched comparison children. Both children with and…

  1. Memory Performance in Adults with Down Syndrome.

    ERIC Educational Resources Information Center

    Simon, Elliott W.; And Others

    1995-01-01

    The memory abilities of adults (N=20) with Down Syndrome (DS) were compared to subjects matched on age and IQ and on age alone. Three memory tasks were employed: facial recognition, free recall of pictures and words, and cued recall of separate or interacting pictures. In DS individuals, memory was improved primarily by practice and interactive…

  2. Recognition-induced forgetting of faces in visual long-term memory.

    PubMed

    Rugo, Kelsi F; Tamler, Kendall N; Woodman, Geoffrey F; Maxcey, Ashleigh M

    2017-10-01

    Despite more than a century of evidence that long-term memory for pictures and words are different, much of what we know about memory comes from studies using words. Recent research examining visual long-term memory has demonstrated that recognizing an object induces the forgetting of objects from the same category. This recognition-induced forgetting has been shown with a variety of everyday objects. However, unlike everyday objects, faces are objects of expertise. As a result, faces may be immune to recognition-induced forgetting. However, despite excellent memory for such stimuli, we found that faces were susceptible to recognition-induced forgetting. Our findings have implications for how models of human memory account for recognition-induced forgetting as well as represent objects of expertise and consequences for eyewitness testimony and the justice system.

  3. Assessing the Utility of a Virtual Environment for Enhancing Facial Affect Recognition in Adolescents with Autism

    ERIC Educational Resources Information Center

    Bekele, Esubalew; Crittendon, Julie; Zheng, Zhi; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan

    2014-01-01

    Teenagers with autism spectrum disorder (ASD) and age-matched controls participated in a dynamic facial affect recognition task within a virtual reality (VR) environment. Participants identified the emotion of a facial expression displayed at varied levels of intensity by a computer generated avatar. The system assessed performance (i.e.,…

  4. Genetic variations in the dopamine system and facial expression recognition in healthy Chinese college students.

    PubMed

    Zhu, Bi; Chen, Chuansheng; Moyzis, Robert K; Dong, Qi; Chen, Chunhui; He, Qinghua; Stern, Hal S; Li, He; Li, Jin; Li, Jun; Lessard, Jared; Lin, Chongde

    2012-01-01

    This study investigated the relation between genetic variations in the dopamine system and facial expression recognition. A sample of Chinese college students (n = 478) was given a facial expression recognition task. Subjects were genotyped for 98 loci [96 single-nucleotide polymorphisms (SNPs) and 2 variable number tandem repeats] in 16 genes involved in the dopamine neurotransmitter system, including its 4 subsystems: synthesis (TH, DDC, and DBH), degradation/transport (COMT, MAOA, MAOB, and SLC6A3), receptors (DRD1, DRD2, DRD3, DRD4, and DRD5), and modulation (NTS, NTSR1, NTSR2, and NLN). To quantify the total contributions of the dopamine system to emotion recognition, we used a series of multiple regression models. Permutation analyses were performed to assess the posterior probabilities of obtaining such results. Among the 78 loci that were included in the final analyses (after excluding 12 SNPs that were in high linkage disequilibrium and 8 that were not in Hardy-Weinberg equilibrium), 1 (for fear), 3 (for sadness), 5 (for anger), 13 (for surprise), and 15 (for disgust) loci exhibited main effects on the recognition of facial expressions. Genetic variations in the dopamine system accounted for 3% for fear, 6% for sadness, 7% for anger, 10% for surprise, and 18% for disgust, with the latter surviving a stringent permutation test. Genetic variations in the dopamine system (especially the dopamine synthesis and modulation subsystems) made significant contributions to individual differences in the recognition of disgust faces. Copyright © 2012 S. Karger AG, Basel.
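    The permutation logic described above, re-estimating the variance explained by a multiple regression after shuffling the phenotype labels, can be sketched roughly as follows. The genotype coding, sample sizes, and effect sizes are all invented for illustration and do not reproduce the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def r_squared(X, y):
        """Variance in y explained by a multiple regression on the columns of X."""
        X1 = np.column_stack([np.ones(len(y)), X])   # add an intercept column
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1 - resid.var() / y.var()

    # Toy data: 478 subjects, 15 loci coded 0/1/2, a small true effect at 3 loci
    n, p = 478, 15
    X = rng.integers(0, 3, size=(n, p)).astype(float)
    y = X[:, :3] @ np.array([0.3, 0.2, 0.2]) + rng.normal(size=n)

    observed = r_squared(X, y)
    # Null distribution: R-squared after destroying the genotype-phenotype link
    null = np.array([r_squared(X, rng.permutation(y)) for _ in range(1000)])
    p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
    ```

    Because ordinary R-squared inflates with the number of predictors, the permutation null (rather than zero) is the appropriate reference point, which is why the study's "stringent permutation test" matters for the larger effects.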

  5. Facial and prosodic emotion recognition in social anxiety disorder.

    PubMed

    Tseng, Huai-Hsuan; Huang, Yu-Lien; Chen, Jian-Ting; Liang, Kuei-Yu; Lin, Chao-Cheng; Chen, Sue-Huei

    2017-07-01

    Patients with social anxiety disorder (SAD) have a cognitive preference to negatively evaluate emotional information. In particular, the preferential biases in prosodic emotion recognition in SAD have been much less explored. The present study aims to investigate whether SAD patients retain negative evaluation biases across visual and auditory modalities when given sufficient response time to recognise emotions. Thirty-one SAD patients and 31 age- and gender-matched healthy participants completed a culturally suitable non-verbal emotion recognition task and received clinical assessments for social anxiety and depressive symptoms. A repeated measures analysis of variance was conducted to examine group differences in emotion recognition. Compared to healthy participants, SAD patients were significantly less accurate at recognising facial and prosodic emotions, and spent more time on emotion recognition. The differences were mainly driven by the lower accuracy and longer reaction times for recognising fearful emotions in SAD patients. Within the SAD patients, lower accuracy of sad face recognition was associated with higher severity of depressive and social anxiety symptoms, particularly with avoidance symptoms. These findings may represent a cross-modality pattern of avoidance in the later stage of identifying negative emotions in SAD. This pattern may be linked to clinical symptom severity.

  6. The role of skin colour in face recognition.

    PubMed

    Bar-Haim, Yair; Saidel, Talia; Yovel, Galit

    2009-01-01

    People have better memory for faces from their own racial group than for faces from other races. It has been suggested that this own-race recognition advantage depends on an initial categorisation of faces into own and other race based on racial markers, resulting in poorer encoding of individual variations in other-race faces. Here, we used a study-test recognition task with stimuli in which the skin colour of African and Caucasian faces was manipulated to produce four categories representing the cross-section between skin colour and facial features. We show that, despite the notion that skin colour plays a major role in categorising faces into own and other-race faces, its effect on face recognition is minor relative to differences across races in facial features.

  7. [Emotional facial recognition difficulties as primary deficit in children with attention deficit hyperactivity disorder: a systematic review].

    PubMed

    Rodrigo-Ruiz, D; Perez-Gonzalez, J C; Cejudo, J

    2017-08-16

    It has recently been noted that children with attention deficit hyperactivity disorder (ADHD) show a deficit in emotional competence and emotional intelligence, specifically in their ability to recognize emotions. We present a systematic review of the scientific literature on the recognition of emotional facial expressions in children with ADHD, in order to establish or rule out the existence of an emotional deficit as a primary dysfunction of this disorder and, where appropriate, the effect size of the differences relative to typically developing (neurotypical) children. The results reveal recent interest in the issue and a shortage of data. Although there is no complete agreement, most studies show that recognition of emotional facial expressions is affected in children with ADHD, who are significantly less accurate than children in control groups at recognizing emotions communicated through facial expressions. Some of these studies compare the recognition of different discrete emotions; they observe that children with ADHD tend to have greater difficulty recognizing negative emotions, especially anger, fear, and disgust. These results have direct implications for the educational and clinical diagnosis of ADHD and for educational intervention: for children with ADHD, emotional education might be a valuable aid.

  8. Recognition memory, self-other source memory, and theory-of-mind in children with autism spectrum disorder.

    PubMed

    Lind, Sophie E; Bowler, Dermot M

    2009-09-01

    This study investigated semantic and episodic memory in autism spectrum disorder (ASD), using a task which assessed recognition and self-other source memory. Children with ASD showed undiminished recognition memory but significantly diminished source memory, relative to age- and verbal ability-matched comparison children. Both children with and without ASD showed an "enactment effect", demonstrating significantly better recognition and source memory for self-performed actions than other-person-performed actions. Within the comparison group, theory-of-mind (ToM) task performance was significantly correlated with source memory, specifically for other-person-performed actions (after statistically controlling for verbal ability). Within the ASD group, ToM task performance was not significantly correlated with source memory (after controlling for verbal ability). Possible explanations for these relations between source memory and ToM are considered.

  9. Test battery for measuring the perception and recognition of facial expressions of emotion

    PubMed Central

    Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner

    2014-01-01

    Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528

  10. The Role of Recognition Memory in Anaphor Identification

    PubMed Central

    Dopkins, Stephen; Trinh Ngo, Catherine

    2007-01-01

    In studies of anaphor comprehension, the capacity for recognizing a noun in a sentence decreases following the resolution of a repeated-noun anaphor (Gernsbacher, 1989). In studies of recognition memory, the capacity for recognizing a noun in a scrambled sentence decreases following the recognition that another noun has occurred before in the scrambled sentence (Dopkins & Ngo, 2002). The results of the present study suggest that these two phenomena reflect the same recognition memory process. The results suggest further that this is not because participants in studies of anaphor comprehension ignore the discourse properties of the stimulus materials and treat them as lists of words upon which memory tests are to be given. These results suggest that recognition processes play a role in anaphor comprehension and that such processes are in part the means by which repeated-noun anaphors are identified as such. PMID:18163155

  11. Anxiety disorders in adolescence are associated with impaired facial expression recognition to negative valence.

    PubMed

    Jarros, Rafaela Behs; Salum, Giovanni Abrahão; Belem da Silva, Cristiano Tschiedel; Toazza, Rudineia; de Abreu Costa, Marianna; Fumagalli de Salles, Jerusa; Manfro, Gisele Gus

    2012-02-01

    The aim of the present study was to test the ability of adolescents with a current anxiety diagnosis to recognize facial affective expressions, compared to those without an anxiety disorder. Forty cases and 27 controls were selected from a larger cross-sectional community sample of adolescents, aged from 10 to 17 years old. Adolescents' facial recognition of six human emotions (sadness, anger, disgust, happiness, surprise and fear) and neutral faces was assessed through a facial labeling test using Ekman's Pictures of Facial Affect (POFA). Adolescents with anxiety disorders had a higher mean number of errors on angry faces as compared to controls: 3.1 (SD=1.13) vs. 2.5 (SD=2.5), OR=1.72 (CI95% 1.02 to 2.89; p=0.040). However, they named neutral faces more accurately than adolescents without an anxiety diagnosis: 15% of cases vs. 37.1% of controls presented at least one error on neutral faces, OR=3.46 (CI95% 1.02 to 11.7; p=0.047). No differences were found for other human emotions or for the distribution of errors on each emotional face between the groups. Our findings support an anxiety-mediated influence on the recognition of facial expressions in adolescence. This difficulty in recognizing angry faces, together with greater accuracy in naming neutral faces, may lead to misinterpretation of social cues and may explain some aspects of the impairment in social interactions in adolescents with anxiety disorders. Copyright © 2011 Elsevier Ltd. All rights reserved.
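    Odds ratios with confidence intervals like those reported above are computed from a 2x2 contingency table; a minimal sketch using the standard Woolf (log-OR) method. The counts below are hypothetical and are not reconstructed from this study:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and 95% CI for a 2x2 table:
            a = cases with the outcome,    b = cases without,
            c = controls with the outcome, d = controls without.
        CI uses the Woolf method: exp(log(OR) +/- z * SE(log OR)).
        """
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi

    # Hypothetical: 20 of 40 cases vs. 9 of 27 controls show an error pattern
    or_, lo, hi = odds_ratio_ci(a=20, b=20, c=9, d=18)  # OR = (20*18)/(20*9) = 2.0
    ```

    An interval that excludes 1.0 corresponds to a significant group difference at the chosen alpha, which is how the reported CIs (e.g. 1.02 to 2.89) support the p-values given.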

  12. The role of encoding and attention in facial emotion memory: an EEG investigation.

    PubMed

    Brenner, Colleen A; Rumak, Samuel P; Burns, Amy M N; Kieffaber, Paul D

    2014-09-01

    Facial expressions are encoded via sensory mechanisms, but meaning extraction and salience of these expressions involve cognitive functions. We investigated the time course of sensory encoding and subsequent maintenance in memory via EEG. Twenty-nine healthy participants completed a facial emotion delayed match-to-sample task. P100, N170 and N250 ERPs were measured in response to the first stimulus, and evoked theta power (4-7 Hz) was measured during the delay interval. Negative facial expressions produced larger N170 amplitudes and greater theta power early in the delay. N170 amplitude correlated with theta power; however, larger N170 amplitude coupled with greater theta power predicted behavioural performance for only one of the six emotion conditions tested (very happy; see Supplemental Data). These findings indicate that the N170 ERP may be sensitive to emotional facial expressions when task demands require encoding and retention of this information. Furthermore, sustained theta activity may represent continued attentional processing that supports short-term memory, especially of negative facial stimuli. Further study is needed to investigate the potential influence of these measures, and their interaction, on behavioural performance. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  13. Visual Scanning Patterns and Executive Function in Relation to Facial Emotion Recognition in Aging

    PubMed Central

    Circelli, Karishma S.; Clark, Uraina S.; Cronin-Golomb, Alice

    2012-01-01

    Objective: The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods: We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results: OA were less accurate than YA at identifying fear (p<.05, r=.44) and more accurate at identifying disgust (p<.05, r=.39). OA fixated less than YA on the top half of the face for disgust, fearful, happy, neutral, and sad faces (p’s<.05, r’s≥.38), whereas there was no group difference for landscapes. For OA, executive function was correlated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion: We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition. PMID:22616800

  14. The effects of acute alcohol intoxication on the cognitive mechanisms underlying false facial recognition.

    PubMed

    Colloff, Melissa F; Flowe, Heather D

    2016-06-01

    False face recognition rates are sometimes higher when faces are learned while under the influence of alcohol. Alcohol myopia theory (AMT) proposes that acute alcohol intoxication during face learning causes people to attend to only the most salient features of a face, impairing the encoding of less salient facial features. Yet, there is currently no direct evidence to support this claim. Our objective was to test whether acute alcohol intoxication impairs face learning by causing subjects to attend to a salient (i.e., distinctive) facial feature over other facial features, as per AMT. We employed a balanced placebo design (N = 100). Subjects in the alcohol group were dosed to achieve a blood alcohol concentration (BAC) of 0.06 %, whereas the no alcohol group consumed tonic water. Alcohol expectancy was controlled. Subjects studied faces with or without a distinctive feature (e.g., scar, piercing). An old-new recognition test followed. Some of the test faces were "old" (i.e., previously studied), and some were "new" (i.e., not previously studied). We varied whether the new test faces had a previously studied distinctive feature versus other familiar characteristics. Intoxicated and sober recognition accuracy was comparable, but subjects in the alcohol group made more positive identifications overall compared to the no alcohol group. The results are not in keeping with AMT. Rather, a more general cognitive mechanism appears to underlie false face recognition in intoxicated subjects. Specifically, acute alcohol intoxication during face learning results in more liberal choosing, perhaps because of an increased reliance on familiarity.

  15. The Moving Window Technique: A Window into Developmental Changes in Attention during Facial Emotion Recognition

    ERIC Educational Resources Information Center

    Birmingham, Elina; Meixner, Tamara; Iarocci, Grace; Kanan, Christopher; Smilek, Daniel; Tanaka, James W.

    2013-01-01

    The strategies children employ to selectively attend to different parts of the face may reflect important developmental changes in facial emotion recognition. Using the Moving Window Technique (MWT), children aged 5-12 years and adults ("N" = 129) explored faces with a mouse-controlled window in an emotion recognition task. An…

  16. A Modified Sparse Representation Method for Facial Expression Recognition.

    PubMed

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we investigate a facial expression recognition method based on modified sparse representation recognition (MSRR). In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of taking the dictionary directly from the samples, and add block dictionary training to the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). In addition, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise, on a self-built database and on the JAFFE and CMU CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze recognition performance and time efficiency. Simulation results show that the coefficients of the MSRR method carry class information, which improves computing speed and achieves a satisfying recognition result.
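
    The decision rule behind sparse-representation classifiers can be sketched briefly. The fragment below is not the paper's MSRR pipeline (no Haar-like+LPP features, LC-K-SVD training, stOMP, or dynamic regularization); it is a minimal NumPy sketch of plain OMP coding over a labeled dictionary, followed by the usual minimum-class-residual decision:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick up to k dictionary atoms
    (columns of D) and least-squares fit y on the selected atoms."""
    residual, idx = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def classify(D, labels, y, k=5):
    """Assign the class whose atoms best reconstruct y (SRC-style rule)."""
    x = omp(D, y, k)
    errs = {}
    for c in set(labels):
        mask = np.array([lab == c for lab in labels])
        errs[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    return min(errs, key=errs.get)

# Toy dictionary: one unit atom per class; the probe is closest to class 0
D = np.array([[1.0, 0.0], [0.0, 1.0]])  # columns are dictionary atoms
print(classify(D, [0, 1], np.array([1.0, 0.1]), k=1))  # → 0
```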

  18. Dopamine and light: effects on facial emotion recognition.

    PubMed

    Cawley, Elizabeth; Tippler, Maria; Coupland, Nicholas J; Benkelfat, Chawki; Boivin, Diane B; Aan Het Rot, Marije; Leyton, Marco

    2017-09-01

    Bright light can affect mood states and social behaviours. Here, we tested potential interacting effects of light and dopamine on facial emotion recognition. Participants were 32 women with subsyndromal seasonal affective disorder tested in either a bright (3000 lux) or dim light (10 lux) environment. Each participant completed two test days, one following the ingestion of a phenylalanine/tyrosine-deficient mixture and one with a nutritionally balanced control mixture, both administered double blind in a randomised order. Approximately four hours post-ingestion, participants completed a self-report measure of mood followed by a facial emotion recognition task. All testing took place between November and March, when seasonal symptoms would be present. Following acute phenylalanine/tyrosine depletion (APTD), compared to the nutritionally balanced control mixture, participants in the dim light condition were more accurate at recognising sad faces, less likely to misclassify them, and faster at responding to them, effects that were independent of changes in mood. Effects of APTD on responses to sad faces in the bright light group were less consistent. There were no APTD effects on responses to other emotions, with one exception: a significant light × mixture interaction was seen for the reaction time to fear, but the pattern of effect was not predicted a priori or seen on other measures. Together, the results suggest that the processing of sad emotional stimuli might be greater when dopamine transmission is low. Bright light exposure, used for the treatment of both seasonal and non-seasonal mood disorders, might produce some of its benefits by preventing this effect.

  19. Infant Visual Recognition Memory: Independent Contributions of Speed and Attention.

    ERIC Educational Resources Information Center

    Rose, Susan A.; Feldman, Judith F.; Jankowski, Jeffery J.

    2003-01-01

    Examined contributions of cognitive processing speed, short-term memory capacity, and attention to infant visual recognition memory. Found that infants who showed better attention and faster processing had better recognition memory. Contributions of attention and processing speed were independent of one another and similar at all ages studied--5,…

  20. Sources of interference in item and associative recognition memory.

    PubMed

    Osth, Adam F; Dennis, Simon

    2015-04-01

    A powerful theoretical framework for exploring recognition memory is the global matching framework, in which a cue's memory strength reflects the similarity of the retrieval cues being matched against the contents of memory simultaneously. Contributions at retrieval can be categorized as matches and mismatches to the item and context cues, including the self match (match on item and context), item noise (match on context, mismatch on item), context noise (match on item, mismatch on context), and background noise (mismatch on item and context). We present a model that directly parameterizes the matches and mismatches to the item and context cues, which enables estimation of the magnitude of each interference contribution (item noise, context noise, and background noise). The model was fit within a hierarchical Bayesian framework to 10 recognition memory datasets that use manipulations of strength, list length, list strength, word frequency, study-test delay, and stimulus class in item and associative recognition. Estimates of the model parameters revealed at most a small contribution of item noise that varies by stimulus class, with virtually no item noise for single words and scenes. Despite the unpopularity of background noise in recognition memory models, background noise estimates dominated at retrieval across nearly all stimulus classes with the exception of high frequency words, which exhibited equivalent levels of context noise and background noise. These parameter estimates suggest that the majority of interference in recognition memory stems from experiences acquired before the learning episode. (c) 2015 APA, all rights reserved.
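
    The four match/mismatch contributions can be made concrete with a toy additive sketch. This is not the paper's hierarchical Bayesian model; the parameter values below are invented for illustration only:

```python
def probe_strength(is_target, n_list, n_pretrial, n_background,
                   self_match=1.0, item_noise=0.02,
                   context_noise=0.05, background_noise=0.01):
    """Toy global-match decomposition (illustrative parameters).
    A target probe gets the self match; every probe accumulates
    interference from the other list items (item noise), prior
    occurrences of the probe item (context noise), and unrelated
    pre-experimental traces (background noise)."""
    s = self_match if is_target else 0.0
    s += (n_list - (1 if is_target else 0)) * item_noise   # other list items
    s += n_pretrial * context_noise                        # prior occurrences
    s += n_background * background_noise                   # unrelated traces
    return s

# Targets should end up stronger than lures under matched conditions
target = probe_strength(True, 10, 3, 100)
lure = probe_strength(False, 10, 3, 100)
```

In this toy setting the background term dominates the interference for both probe types, echoing the paper's conclusion that most interference predates the study list.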

  1. Optogenetic Stimulation of Prefrontal Glutamatergic Neurons Enhances Recognition Memory.

    PubMed

    Benn, Abigail; Barker, Gareth R I; Stuart, Sarah A; Roloff, Eva V L; Teschemacher, Anja G; Warburton, E Clea; Robinson, Emma S J

    2016-05-04

    Finding effective cognitive enhancers is a major health challenge; however, modulating glutamatergic neurotransmission has the potential to enhance performance in recognition memory tasks. Previous studies using glutamate receptor antagonists have revealed that the medial prefrontal cortex (mPFC) plays a central role in associative recognition memory. The present study investigates short-term recognition memory using optogenetics to target glutamatergic neurons within the rodent mPFC specifically. Selective stimulation of glutamatergic neurons during the online maintenance of information enhanced associative recognition memory in normal animals. This cognitive enhancing effect was replicated by local infusions of the AMPAkine CX516, but not CX546, which differ in their effects on EPSPs. This suggests that enhancing the amplitude, but not the duration, of excitatory synaptic currents improves memory performance. Increasing glutamate release through infusions of the mGluR7 presynaptic receptor antagonist MMPIP had no effect on performance. These results provide new mechanistic information that could guide the targeting of future cognitive enhancers. Our work suggests that improved associative-recognition memory can be achieved by enhancing endogenous glutamatergic neuronal activity selectively using an optogenetic approach. We build on these observations to recapitulate this effect using drug treatments that enhance the amplitude of EPSPs; however, drugs that alter the duration of the EPSP or increase glutamate release lack efficacy. This suggests that both neural and temporal specificity are needed to achieve cognitive enhancement. Copyright © 2016 Benn et al.

  2. Usage of semantic representations in recognition memory.

    PubMed

    Nishiyama, Ryoji; Hirano, Tetsuji; Ukita, Jun

    2017-11-01

    Meanings of words facilitate false acceptance as well as correct rejection of lures in recognition memory tests, depending on the experimental context. This suggests that semantic representations are used in remembering both directly and indirectly (i.e., mediated by perceptual representations). Studies using the memory conjunction error (MCE) paradigm, in which lures are built from component parts of studied words, have reported semantic facilitation of the rejection of lures. However, attending to the components of the lures could potentially cause this. We therefore investigated whether semantic overlap of lures facilitates MCEs using Japanese Kanji words, for which the whole-word image plays a greater role in reading. The experiments demonstrated semantic facilitation of MCEs in a delayed recognition test (Experiment 1) and in immediate recognition tests in which participants were prevented from using phonological or orthographic representations (Experiment 2), with the effect most salient in individuals with high semantic memory capacity (Experiment 3). Additionally, analysis of the receiver operating characteristic suggested that this effect is attributable to familiarity-based memory judgement and phantom recollection. These findings indicate that semantic representations can be used directly in remembering, even when perceptual representations of studied words are available.

  3. The Formation and Stability of Recognition Memory: What Happens Upon Recall?

    PubMed Central

    Davis, Sabrina; Renaudineau, Sophie; Poirier, Roseline; Poucet, Bruno; Save, Etienne; Laroche, Serge

    2010-01-01

    The idea that an already consolidated memory can become destabilized after recall and requires a process of reconsolidation to maintain it for subsequent use has gained much credence over the past decade. Experimental studies in rodents have shown that pharmacological, genetic, or injurious manipulation at the time of memory reactivation can disrupt the already consolidated memory. Despite the force of experimental data showing this phenomenon, a number of questions have remained unanswered and no consensus has emerged as to the conditions under which a memory can be disrupted following reactivation. To date, most rodent studies of reconsolidation are based on negatively reinforced memories, in particular fear-associated memories, while the storage and stability of forms of memory that do not rely on explicit reinforcement have been less often studied. In this review, we focus on recognition memory, a paradigm widely used in humans to probe declarative memory. We briefly outline recent advances in our understanding of the processes and brain circuits involved in recognition memory and review the evidence that recognition memory can undergo reconsolidation upon reactivation. We also review recent findings suggesting that some molecular mechanisms underlying consolidation of recognition memory are similarly recruited after recall to ensure memory stability, while others are more specifically engaged in consolidation or reconsolidation. Finally, we provide novel data on the role of Rsk2, a mental retardation gene, and of the transcription factor zif268/egr1 in reconsolidation of object-location memory, and offer suggestions as to how assessing the activation of certain molecular mechanisms following recall in recognition memory may help understand the relative importance of different aspects of remodeling or updating long-lasting memories. PMID:21120149

  4. Bayesian Analysis of Recognition Memory: The Case of the List-Length Effect

    ERIC Educational Resources Information Center

    Dennis, Simon; Lee, Michael D.; Kinnell, Angela

    2008-01-01

    Recognition memory experiments are an important source of empirical constraints for theories of memory. Unfortunately, standard methods for analyzing recognition memory data have problems that are often severe enough to prevent clear answers being obtained. A key example is whether longer lists lead to poorer recognition performance. The presence…

  5. ERP Correlates of Recognition Memory in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Massand, Esha; Bowler, Dermot M.; Mottron, Laurent; Hosein, Anthony; Jemel, Boutheina

    2013-01-01

    Recognition memory in autism spectrum disorder (ASD) tends to be undiminished compared to that of typically developing (TD) individuals (Bowler et al. 2007), but it is still unknown whether memory in ASD relies on qualitatively similar or different neurophysiology. We sought to explore the neural activity underlying recognition by employing the…

  6. Associations between facial emotion recognition and young adolescents’ behaviors in bullying

    PubMed Central

    Gini, Gianluca; Altoè, Gianmarco

    2017-01-01

    This study investigated whether the different behaviors young adolescents can enact during bullying episodes were associated with their ability to recognize morphed facial expressions of the six basic emotions, expressed at high and low intensity. The sample included 117 middle-school students (45.3% girls; mean age = 12.4 years) who filled in a peer nomination questionnaire and individually performed a computerized emotion recognition task. Bayesian generalized mixed-effects models showed a complex picture, in which type and intensity of emotions, students’ behavior and gender interacted in explaining recognition accuracy. Results were discussed with a particular focus on negative emotions, suggesting a “neutral” nature of emotion recognition ability, which does not necessarily lead to moral behavior but can also be used for pursuing immoral goals. PMID:29131871

  7. Adaptive metric learning with deep neural networks for video-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Liu, Xiaofeng; Ge, Yubin; Yang, Chao; Jia, Ping

    2018-01-01

    Video-based facial expression recognition has become increasingly important for plenty of applications in the real world. Although numerous efforts have been made for single sequences, how to balance the complex distribution of intra- and interclass variations between sequences remains a great difficulty in this area. We propose the adaptive (N+M)-tuplet clusters loss function and optimize it together with the softmax loss during the training phase. The variations introduced by personal attributes are alleviated using similarity measurements of multiple samples in the feature space, with far fewer comparisons than conventional deep metric learning approaches, which enables metric calculations for large-data applications (e.g., videos). Both the spatial and temporal relations are explored by a unified framework that consists of an Inception-ResNet network with long short-term memory (LSTM) and a two-branch fully connected layer structure. Our proposed method has been evaluated on three well-known databases, and the experimental results show that our method outperforms many state-of-the-art approaches.
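
    The paper's exact (N+M)-tuplet clusters loss is not reproduced here. As a simplified, hedged stand-in, the standard triplet-style margin loss over one anchor, one positive, and M negatives illustrates the core deep-metric-learning idea the abstract builds on: pull same-class embeddings together, push other-class embeddings at least a margin away:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negatives, margin=0.2):
    """Hinge loss: each negative must sit at least `margin` farther
    (in squared Euclidean distance) from the anchor than the positive.
    Simplified illustration, not the paper's (N+M)-tuplet clusters loss."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_negs = np.sum((negatives - anchor) ** 2, axis=1)  # one row per negative
    return float(np.sum(np.maximum(0.0, margin + d_pos - d_negs)))
```

In the paper's setting the embeddings would come from the Inception-ResNet+LSTM network; here they are plain vectors so the loss itself can be inspected in isolation.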

  8. Reaction Time of Facial Affect Recognition in Asperger's Disorder for Cartoon and Real, Static and Moving Faces

    ERIC Educational Resources Information Center

    Miyahara, Motohide; Bray, Anne; Tsujii, Masatsugu; Fujita, Chikako; Sugiyama, Toshiro

    2007-01-01

    This study used a choice reaction-time paradigm to test the perceived impairment of facial affect recognition in Asperger's disorder. Twenty teenagers with Asperger's disorder and 20 controls were compared with respect to the latency and accuracy of response to happy or disgusted facial expressions, presented in cartoon or real images and in…

  9. The Effects of Sex of Subject, Sex and Attractiveness of Photo on Facial Recognition.

    ERIC Educational Resources Information Center

    Carroo, Agatha W.; Mozingo, R.

    1989-01-01

    Assessed the effects of sex of subject, and of sex and attractiveness of photo, on facial recognition with 25 male and 25 female college students. Found that male subjects performed better with male faces, as measured by d′ scores. (Author/ABL)
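
    The d′ scores mentioned come from signal detection theory: sensitivity is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch (the rates below are hypothetical, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).
    Rates must be strictly between 0 and 1 (apply a correction otherwise)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

print(d_prime(0.84, 0.16))  # roughly 2.0 for these hypothetical rates
```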

  10. Adrenergic enhancement of consolidation of object recognition memory.

    PubMed

    Dornelles, Arethuza; de Lima, Maria Noemia Martins; Grazziotin, Manoela; Presti-Torres, Juliana; Garcia, Vanessa Athaide; Scalco, Felipe Siciliani; Roesler, Rafael; Schröder, Nadja

    2007-07-01

    Extensive evidence indicates that epinephrine (EPI) modulates memory consolidation for emotionally arousing tasks in animals and human subjects. However, previous studies have not examined the effects of EPI on consolidation of recognition memory. Here we report that systemic administration of EPI enhances consolidation of memory for a novel object recognition (NOR) task under different training conditions. Control male rats given a systemic injection of saline (0.9% NaCl) immediately after NOR training showed significant memory retention when tested 1.5 or 24 h, but not 96 h, after training. In contrast, rats given a post-training injection of EPI showed significant retention of NOR at all delays. In a second experiment using a different training condition, rats treated with EPI, but not saline-treated animals, showed significant NOR retention at both 1.5- and 24-h delays. We next showed that the EPI-induced enhancement of retention tested 96 h after training was prevented by pretraining systemic administration of the beta-adrenoceptor antagonist propranolol. The findings suggest that, as previously observed in experiments using aversively motivated tasks, epinephrine modulates consolidation of recognition memory and that these effects require activation of beta-adrenoceptors.

  11. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
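
    The core of the "localized kernel eigenspaces" idea — PCA carried out in a kernel-induced feature space over local sub-regions — can be sketched generically. This is not the authors' implementation (phase congruency maps, sub-region merging, and decision-level fusion are omitted); it is a minimal RBF kernel PCA in NumPy, an assumed stand-in for a single module:

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Project the rows of X onto the top kernel principal components,
    using an RBF (Gaussian) kernel. Generic sketch, not the paper's
    modular kernel eigenspaces method."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # double-center the Gram matrix
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]  # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    # Projections of the training points: eigenvectors scaled by sqrt(lambda)
    return vecs * np.sqrt(np.maximum(vals, 1e-12))
```

In the paper, one such eigenspace would be built per facial sub-region and per modality (visual, thermal), and the resulting scores combined at the decision level.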

  12. Analysis of differences between Western and East-Asian faces based on facial region segmentation and PCA for facial expression recognition

    NASA Astrophysics Data System (ADS)

    Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide

    2017-01-01

    Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. This paper therefore presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focused on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated on 4 standard databases for both racial groups, and the results are compared with a cross-cultural human study with 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the eyes-eyebrows and mouth regions, for expressions of fear and disgust respectively. This work presents important findings for a better design of automatic facial expression recognition systems that accounts for the differences between the two racial groups.
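
    The appearance-based branch — PCA over the pixel intensities of a facial region — follows the classic eigenfaces recipe. A minimal sketch (the 125-point geometric branch, region segmentation, and database specifics are omitted; the array shapes are assumptions for illustration):

```python
import numpy as np

def pca_features(X, n_components):
    """Appearance-based features: project mean-centered pixel vectors
    (one flattened region image per row of X) onto the top principal
    components. Returns (projections, components, mean)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centered data; rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components]
    return Xc @ V.T, V, mu
```

Projecting both racial groups into the same component space is what allows region-by-region comparison of where their expression subspaces diverge.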

  13. Does Facial Expression Recognition Provide a Toehold for the Development of Emotion Understanding?

    ERIC Educational Resources Information Center

    Strand, Paul S.; Downs, Andrew; Barbosa-Leiker, Celestina

    2016-01-01

    The authors explored predictions from basic emotion theory (BET) that facial emotion expression recognition skills are insular with respect to their own development, and yet foundational to the development of emotional perspective-taking skills. Participants included 417 preschool children for whom estimates of these 2 emotion understanding…

  14. Proactive Interference in Short-Term Recognition and Recall Memory

    ERIC Educational Resources Information Center

    Dillon, Richard F.; Petrusic, William M.

    1972-01-01

    Purpose of study was to (a) compare the rate of increase of proactive interference over the first few trials under recall and recognition memory test conditions, (2) determine the effects of two types of distractors on short-term recognition, and (3) test memory after proactive interference had reached a stable level under each of three test…

  15. Facial expression recognition and emotional regulation in narcolepsy with cataplexy.

    PubMed

    Bayard, Sophie; Croisier Langenier, Muriel; Dauvilliers, Yves

    2013-04-01

    Cataplexy is pathognomonic of narcolepsy with cataplexy, and is defined by a transient loss of muscle tone triggered by strong emotions. Recent research suggests abnormal amygdala function in narcolepsy with cataplexy. Emotion processing and emotional regulation strategies are complex functions involving cortical and limbic structures, such as the amygdala. As the amygdala has been shown to play a role in facial emotion recognition, we tested the hypothesis that patients with narcolepsy with cataplexy would have impaired recognition of facial emotional expressions compared with patients affected with central hypersomnia without cataplexy and healthy controls. We also aimed to determine whether cataplexy modulates emotional regulation strategies. Emotional intensity, arousal and valence ratings on Ekman faces displaying happiness, surprise, fear, anger, disgust, sadness and neutral expressions from 21 drug-free patients with narcolepsy with cataplexy were compared with those of 23 drug-free sex-, age- and intellectual-level-matched adult patients with hypersomnia without cataplexy and 21 healthy controls. All participants underwent polysomnography recording and multiple sleep latency tests, and completed depression, anxiety and emotional regulation questionnaires. Performance of patients with narcolepsy with cataplexy did not differ from that of patients with hypersomnia without cataplexy or healthy controls, on either intensity rating of each emotion on its prototypical label or mean ratings for valence and arousal. Moreover, patients with narcolepsy with cataplexy did not use different emotional regulation strategies. The level of depressive and anxious symptoms in narcolepsy with cataplexy did not differ from the other groups. Our results demonstrate that narcolepsy with cataplexy accurately perceives and discriminates facial emotions, and regulates emotions normally. The absence of alteration of perceived affective valence remains a finding of major clinical interest in narcolepsy with cataplexy.

  16. Exogenous temporal cues enhance recognition memory in an object-based manner.

    PubMed

    Ohyama, Junji; Watanabe, Katsumi

    2010-11-01

    Exogenous attention enhances the perception of attended items in both a space-based and an object-based manner. Exogenous attention also improves recognition memory for attended items in the space-based mode. However, it has not been examined whether object-based exogenous attention enhances recognition memory. To address this issue, we examined whether a sudden visual change in a task-irrelevant stimulus (an exogenous cue) would affect participants' recognition memory for items that were serially presented around a cued time. The results showed that recognition accuracy for an item was strongly enhanced when the visual cue occurred at the same location and time as the item (Experiments 1 and 2). The memory enhancement effect occurred when the exogenous visual cue and an item belonged to the same object (Experiments 3 and 4) and even when the cue was counterpredictive of the timing of an item to be asked about (Experiment 5). The present study suggests that an exogenous temporal cue automatically enhances the recognition accuracy for an item that is presented at close temporal proximity to the cue and that recognition memory enhancement occurs in an object-based manner.

  17. Dentate gyrus supports slope recognition memory, shades of grey-context pattern separation and recognition memory, and CA3 supports pattern completion for object memory.

    PubMed

    Kesner, Raymond P; Kirk, Ryan A; Yu, Zhenghui; Polansky, Caitlin; Musso, Nick D

    2016-03-01

In order to examine the role of the dorsal dentate gyrus (dDG) in slope (vertical space) recognition and possible pattern separation, various slope (vertical space) degrees were used in a novel exploratory paradigm to measure novelty detection for changes in slope (vertical space) recognition memory and slope memory pattern separation in Experiment 1. The results of the experiment indicate that control rats displayed a slope recognition memory function with a pattern separation process for slope memory that is dependent upon the magnitude of change in slope between study and test phases. In contrast, the dDG lesioned rats displayed an impairment in slope recognition memory, though because there was no significant interaction between the two groups and slope memory, a reliable pattern separation impairment for slope could not be firmly established in the dDG lesioned rats. In Experiment 2, in order to determine whether the dDG plays a role in shades of grey spatial context recognition and possible pattern separation, shades of grey were used in a novel exploratory paradigm to measure novelty detection for changes in the shades of grey context environment. The results of the experiment indicate that control rats displayed a shades of grey-context pattern separation effect across levels of separation of context (shades of grey). In contrast, the dDG lesioned rats displayed a significant interaction between the two groups and levels of shades of grey, suggesting impairment in a pattern separation function for levels of shades of grey. In Experiment 3, in order to determine whether the dorsal CA3 (dCA3) plays a role in object pattern completion, a new task requiring less training, with a choice based on selecting the correct set of objects in a two-choice discrimination task, was used. The results indicated that control rats displayed a pattern completion function based on the availability of one, two, three or four cues. In contrast, the dCA3 lesioned rats

  18. Recognition memory span in autopsy-confirmed Dementia with Lewy Bodies and Alzheimer's Disease.

    PubMed

    Salmon, David P; Heindel, William C; Hamilton, Joanne M; Vincent Filoteo, J; Cidambi, Varun; Hansen, Lawrence A; Masliah, Eliezer; Galasko, Douglas

    2015-08-01

    Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and Normal Control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from long-term storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Sex differences in emotion recognition: Evidence for a small overall female superiority on facial disgust.

    PubMed

    Connolly, Hannah L; Lefevre, Carmen E; Young, Andrew W; Lewis, Gary J

    2018-05-21

    Although it is widely believed that females outperform males in the ability to recognize other people's emotions, this conclusion is not well supported by the extant literature. The current study sought to provide a strong test of the female superiority hypothesis by investigating sex differences in emotion recognition for five basic emotions using stimuli well-calibrated for individual differences assessment, across two expressive domains (face and body), and in a large sample (N = 1,022: Study 1). We also assessed the stability and generalizability of our findings with two independent replication samples (N = 303: Study 2, N = 634: Study 3). In Study 1, we observed that females were superior to males in recognizing facial disgust and sadness. In contrast, males were superior to females in recognizing bodily happiness. The female superiority for recognition of facial disgust was replicated in Studies 2 and 3, and this observation also extended to an independent stimulus set in Study 2. No other sex differences were stable across studies. These findings provide evidence for the presence of sex differences in emotion recognition ability, but show that these differences are modest in magnitude and appear to be limited to facial disgust. We discuss whether this sex difference may reflect human evolutionary imperatives concerning reproductive fitness and child care. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. Reduced Reliance on Optimal Facial Information for Identity Recognition in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2013-01-01

    Previous research into face processing in autism spectrum disorder (ASD) has revealed atypical biases toward particular facial information during identity recognition. Specifically, a focus on features (or high spatial frequencies [HSFs]) has been reported for both face and nonface processing in ASD. The current study investigated the development…

  1. Meta-Analysis of Facial Emotion Recognition in Behavioral Variant Frontotemporal Dementia: Comparison With Alzheimer Disease and Healthy Controls.

    PubMed

    Bora, Emre; Velakoulis, Dennis; Walterfang, Mark

    2016-07-01

Behavioral disturbances and lack of empathy are distinctive clinical features of behavioral variant frontotemporal dementia (bvFTD) in comparison to Alzheimer disease (AD). The aim of this meta-analytic review was to compare the facial emotion recognition performance of bvFTD with that of healthy controls and AD. The current meta-analysis included a total of 19 studies and involved comparisons of 288 individuals with bvFTD with 329 healthy controls, and of 162 individuals with bvFTD with 147 patients with AD. Facial emotion recognition was significantly impaired in bvFTD in comparison to the healthy controls (d = 1.81) and AD (d = 1.23). In bvFTD, recognition of negative emotions, especially anger (d = 1.48) and disgust (d = 1.41), was severely impaired. Emotion recognition was significantly impaired in bvFTD in comparison to AD in all emotions other than happiness. Impairment of emotion recognition is a relatively specific feature of bvFTD. Routine assessment of social-cognitive abilities including emotion recognition can be helpful in better differentiating between cortical dementias such as bvFTD and AD. © The Author(s) 2016.
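The effect sizes reported in this meta-analysis are Cohen's d, the standardized mean difference between two groups. A minimal sketch of the computation from summary statistics, using hypothetical group values (not figures from the study above):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical summaries: controls, mean accuracy 0.85 (SD 0.05, n = 30)
# versus patients, mean accuracy 0.70 (SD 0.10, n = 30).
print(round(cohens_d(0.85, 0.05, 30, 0.70, 0.10, 30), 2))  # → 1.9
```

By the usual convention, d around 0.2 is small, 0.5 medium, and 0.8 large, so the values above 1.2 reported in the meta-analysis reflect substantial group separation.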

  2. ERP correlates of recognition memory in Autism Spectrum Disorder.

    PubMed

    Massand, Esha; Bowler, Dermot M; Mottron, Laurent; Hosein, Anthony; Jemel, Boutheina

    2013-09-01

    Recognition memory in autism spectrum disorder (ASD) tends to be undiminished compared to that of typically developing (TD) individuals (Bowler et al. 2007), but it is still unknown whether memory in ASD relies on qualitatively similar or different neurophysiology. We sought to explore the neural activity underlying recognition by employing the old/new word repetition event-related potential effect. Behavioural recognition performance was comparable across both groups, and demonstrated superior recognition for low frequency over high frequency words. However, the ASD group showed a parietal rather than anterior onset (300-500 ms), and diminished right frontal old/new effects (800-1500 ms) relative to TD individuals. This study shows that undiminished recognition performance results from a pattern of differing functional neurophysiology in ASD.

  3. Adult Word Recognition and Visual Sequential Memory

    ERIC Educational Resources Information Center

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  4. Shy children are less sensitive to some cues to facial recognition.

    PubMed

    Brunet, Paul M; Mondloch, Catherine J; Schmidt, Louis A

    2010-02-01

    Temperamental shyness in children is characterized by avoidance of faces and eye contact, beginning in infancy. We conducted two studies to determine whether temperamental shyness was associated with deficits in sensitivity to some cues to facial identity. In Study 1, 40 typically developing 10-year-old children made same/different judgments about pairs of faces that differed in the appearance of individual features, the shape of the external contour, or the spacing among features; their parent completed the Colorado childhood temperament inventory (CCTI). Children who scored higher on CCTI shyness made more errors than their non-shy counterparts only when discriminating faces based on the spacing of features. Differences in accuracy were not related to other scales of the CCTI. In Study 2, we showed that these differences were face-specific and cannot be attributed to differences in task difficulty. Findings suggest that shy children are less sensitive to some cues to facial recognition possibly underlying their inability to distinguish certain facial emotions in others, leading to a cascade of secondary negative effects in social behaviour.

  5. Process Demands of Rejection Mechanisms of Recognition Memory

    ERIC Educational Resources Information Center

    Odegard, Timothy N.; Koen, Joshua D.; Gama, Jorge M.

    2008-01-01

    A surge of research has been conducted to examine memory editing mechanisms that help distinguish accurate from inaccurate memories. In the present experiment, the authors examined the ability of participants to use novelty detection, recollection rejection, and plausibility judgments to reject lures presented on a recognition memory test.…

  6. Selective attention meets spontaneous recognition memory: Evidence for effects at retrieval.

    PubMed

    Moen, Katherine C; Miller, Jeremy K; Lloyd, Marianne E

    2017-03-01

Previous research on the effects of divided attention on recognition memory has shown consistent impairments during encoding but more variable effects at retrieval. The present study explored whether effects of selective attention at retrieval and subsequent testing were parallel to those of divided attention. Participants studied a list of pictures and then had a recognition memory test that included both full-attention and selective-attention (the to-be-responded-to object was overlaid atop a blue-outlined object) trials. All participants then completed a second recognition memory test. The results of two experiments suggest that subsequent tests consistently show impacts of the status of the ignored stimulus, and that having an initial test changes performance on a later test. The results are discussed in relation to the effects of attention on memory more generally as well as spontaneous recognition memory research. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Recognition memory of newly learned faces.

    PubMed

    Ishai, Alumit; Yago, Elena

    2006-12-11

    We used event-related fMRI to study recognition memory of newly learned faces. Caucasian subjects memorized unfamiliar, neutral and happy South Korean faces and 4 days later performed a memory retrieval task in the MR scanner. We predicted that previously seen faces would be recognized faster and more accurately and would elicit stronger neural activation than novel faces. Consistent with our hypothesis, novel faces were recognized more slowly and less accurately than previously seen faces. We found activation in a distributed cortical network that included face-responsive regions in the visual cortex, parietal and prefrontal regions, and the hippocampus. Within all regions, correctly recognized, previously seen faces evoked stronger activation than novel faces. Additionally, in parietal and prefrontal cortices, stronger activation was observed during correct than incorrect trials. Finally, in the hippocampus, false alarms to happy faces elicited stronger responses than false alarms to neutral faces. Our findings suggest that face recognition memory is mediated by stimulus-specific representations stored in extrastriate regions; parietal and prefrontal regions where old and new items are classified; and the hippocampus where veridical memory traces are recovered.

  8. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants

    PubMed Central

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces and then their face recognition was tested with static face images. Eye tracking methodology was used to record eye movements during familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better was their face recognition, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387

  9. Collaboration can improve individual recognition memory: evidence from immediate and delayed tests.

    PubMed

    Rajaram, Suparna; Pereira-Pasarin, Luciane P

    2007-02-01

    In two experiments, we tested the effects of collaboration on individual recognition memory. In Experiment 1, participants studied pictures and words either for meaning or for surface properties and made recognition memory judgments individually either following group discussion among 3 members (collaborative condition) or in the absence of discussion (noncollaborative condition). Levels of processing and picture superiority effects were replicated, and collaboration significantly increased individual recognition memory. Experiment 2 replicated this positive effect and showed that even though memory sensitivity declined at longer delays (48 h and 1 week), collaboration continued to exert a positive influence. These findings show that (1) consensus is not necessary for producing benefits of collaboration on individual recognition, (2) collaborative facilitation on individual memory is robust, and (3) collaboration enhances individual memory further if conditions predispose individual accuracy in the absence of collaboration.

  10. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    ERIC Educational Resources Information Center

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  11. Attention to Social Stimuli and Facial Identity Recognition Skills in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Wilson, C. E.; Brock, J.; Palermo, R.

    2010-01-01

    Background: Previous research suggests that individuals with autism spectrum disorder (ASD) have a reduced preference for viewing social stimuli in the environment and impaired facial identity recognition. Methods: Here, we directly tested a link between these two phenomena in 13 ASD children and 13 age-matched typically developing (TD) controls.…

  12. The memory state heuristic: A formal model based on repeated recognition judgments.

    PubMed

    Castela, Marta; Erdfelder, Edgar

    2017-02-01

    The recognition heuristic (RH) theory predicts that, in comparative judgment tasks, if one object is recognized and the other is not, the recognized one is chosen. The memory-state heuristic (MSH) extends the RH by assuming that choices are not affected by recognition judgments per se, but by the memory states underlying these judgments (i.e., recognition certainty, uncertainty, or rejection certainty). Specifically, the larger the discrepancy between memory states, the larger the probability of choosing the object in the higher state. The typical RH paradigm does not allow estimation of the underlying memory states because it is unknown whether the objects were previously experienced or not. Therefore, we extended the paradigm by repeating the recognition task twice. In line with high threshold models of recognition, we assumed that inconsistent recognition judgments result from uncertainty whereas consistent judgments most likely result from memory certainty. In Experiment 1, we fitted 2 nested multinomial models to the data: an MSH model that formalizes the relation between memory states and binary choices explicitly and an approximate model that ignores the (unlikely) possibility of consistent guesses. Both models provided converging results. As predicted, reliance on recognition increased with the discrepancy in the underlying memory states. In Experiment 2, we replicated these results and found support for choice consistency predictions of the MSH. Additionally, recognition and choice latencies were in agreement with the MSH in both experiments. Finally, we validated critical parameters of our MSH model through a cross-validation method and a third experiment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
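The high-threshold models invoked in this abstract treat recognition as arising from discrete memory states rather than a continuous strength signal. A minimal sketch of the simplest such account, a two-high-threshold model with equal detection probabilities for targets and lures; the hit and false-alarm rates below are hypothetical illustration values:

```python
def two_high_threshold(hit_rate, fa_rate):
    """Solve the equal-threshold two-high-threshold (2HT) model:
       H  = D + (1 - D) * g   (detect target, or guess "old")
       FA = (1 - D) * g       (fail to detect lure, guess "old")
       which gives D = H - FA and g = FA / (1 - D)."""
    D = hit_rate - fa_rate   # probability of a certain memory state
    g = fa_rate / (1 - D)    # probability of guessing "old" when uncertain
    return D, g

# Hypothetical rates: 80% hits, 20% false alarms.
D, g = two_high_threshold(0.80, 0.20)
print(round(D, 3), round(g, 3))  # → 0.6 0.5
```

The full multinomial MSH model in the study is richer (separate states for recognition certainty, uncertainty, and rejection certainty, fitted across repeated judgments), but it builds on exactly this kind of discrete-state decomposition.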

  13. Functional Connectivity of Multiple Brain Regions Required for the Consolidation of Social Recognition Memory.

    PubMed

    Tanimizu, Toshiyuki; Kenney, Justin W; Okano, Emiko; Kadoma, Kazune; Frankland, Paul W; Kida, Satoshi

    2017-04-12

    Social recognition memory is an essential and basic component of social behavior that is used to discriminate familiar and novel animals/humans. Previous studies have shown the importance of several brain regions for social recognition memories; however, the mechanisms underlying the consolidation of social recognition memory at the molecular and anatomic levels remain unknown. Here, we show a brain network necessary for the generation of social recognition memory in mice. A mouse genetic study showed that cAMP-responsive element-binding protein (CREB)-mediated transcription is required for the formation of social recognition memory. Importantly, significant inductions of the CREB target immediate-early genes c-fos and Arc were observed in the hippocampus (CA1 and CA3 regions), medial prefrontal cortex (mPFC), anterior cingulate cortex (ACC), and amygdala (basolateral region) when social recognition memory was generated. Pharmacological experiments using a microinfusion of the protein synthesis inhibitor anisomycin showed that protein synthesis in these brain regions is required for the consolidation of social recognition memory. These findings suggested that social recognition memory is consolidated through the activation of CREB-mediated gene expression in the hippocampus/mPFC/ACC/amygdala. Network analyses suggested that these four brain regions show functional connectivity with other brain regions and, more importantly, that the hippocampus functions as a hub to integrate brain networks and generate social recognition memory, whereas the ACC and amygdala are important for coordinating brain activity when social interaction is initiated by connecting with other brain regions. We have found that a brain network composed of the hippocampus/mPFC/ACC/amygdala is required for the consolidation of social recognition memory. SIGNIFICANCE STATEMENT Here, we identify brain networks composed of multiple brain regions for the consolidation of social recognition memory. 

  14. Memory evaluation in mild cognitive impairment using recall and recognition tests.

    PubMed

    Bennett, Ilana J; Golob, Edward J; Parker, Elizabeth S; Starr, Arnold

    2006-11-01

    Amnestic mild cognitive impairment (MCI) is a selective episodic memory deficit that often indicates early Alzheimer's disease. Episodic memory function in MCI is typically defined by deficits in free recall, but can also be tested using recognition procedures. To assess both recall and recognition in MCI, MCI (n = 21) and older comparison (n = 30) groups completed the USC-Repeatable Episodic Memory Test. Subjects memorized two verbally presented 15-item lists. One list was used for three free recall trials, immediately followed by yes/no recognition. The second list was used for three-alternative forced-choice recognition. Relative to the comparison group, MCI had significantly fewer hits and more false alarms in yes/no recognition, and were less accurate in forced-choice recognition. Signal detection analysis showed that group differences were not due to response bias. Discriminant function analysis showed that yes/no recognition was a better predictor of group membership than free recall or forced-choice measures. MCI subjects recalled fewer items than comparison subjects, with no group differences in repetitions, intrusions, serial position effects, or measures of recall strategy (subjective organization, recall consistency). Performance deficits on free recall and recognition in MCI suggest a combination of both tests may be useful for defining episodic memory impairment associated with MCI and early Alzheimer's disease.
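The signal detection analysis mentioned above separates memory sensitivity from response bias. A minimal sketch of the standard computation of sensitivity (d′) and criterion (c) from hit and false-alarm rates; the rates below are hypothetical, not the study's data:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and response bias (criterion c)
    from yes/no recognition hit and false-alarm rates."""
    z = NormalDist().inv_cdf          # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical yes/no recognition performance: 85% hits, 20% false alarms.
dp, c = dprime_and_criterion(0.85, 0.20)
print(round(dp, 2), round(c, 2))  # → 1.88 -0.1
```

A group difference in d′ with comparable c is what licenses the abstract's conclusion that the MCI deficit reflects reduced memory sensitivity rather than a shift in willingness to respond "old."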

  15. Putting the face in context: Body expressions impact facial emotion processing in human infants.

    PubMed

    Rajhans, Purva; Jessen, Sarah; Missana, Manuela; Grossmann, Tobias

    2016-06-01

Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Object Recognition Memory and the Rodent Hippocampus

    ERIC Educational Resources Information Center

    Broadbent, Nicola J.; Gaskin, Stephane; Squire, Larry R.; Clark, Robert E.

    2010-01-01

    In rodents, the novel object recognition task (NOR) has become a benchmark task for assessing recognition memory. Yet, despite its widespread use, a consensus has not developed about which brain structures are important for task performance. We assessed both the anterograde and retrograde effects of hippocampal lesions on performance in the NOR…

  17. Facial emotion recognition deficits: The new face of schizophrenia

    PubMed Central

    Behere, Rishikesh V.

    2015-01-01

Schizophrenia has been classically described as having positive, negative, and cognitive symptom dimensions. Emerging evidence strongly supports a fourth dimension of social cognitive symptoms, with facial emotion recognition deficits (FERD) representing a new face in our understanding of this complex disorder. FERD have been described as one of the important deficits in schizophrenia and may be trait markers for the disorder. FERD are associated with socio-occupational dysfunction and hence are of important clinical relevance. This review discusses FERD in schizophrenia, challenges in their assessment in our cultural context, and their implications for understanding neurobiological mechanisms and for clinical applications. PMID:26600574

  19. Sex influence on face recognition memory moderated by presentation duration and reencoding.

    PubMed

    Weirich, Sebastian; Hoffmann, Ferdinand; Meissner, Lucia; Heinz, Andreas; Bengner, Thomas

    2011-11-01

It has been suggested that women have a better face recognition memory than men. Here we analyzed whether this advantage depends on a better encoding or consolidation of information and whether the advantage is visible during short-term memory (STM) only, or whether it also remains evident in long-term memory (LTM). We tested short- and long-term face recognition memory in 36 nonclinical participants (19 women). We varied the duration of item presentation (1, 5, and 10 s), the time of testing (immediately after the study phase, 1 hr, and 24 hr later), and the possibility to reencode items (none, immediately after the study phase, after 1 hr). Women showed better overall face recognition memory than men (ηp² = .15, p < .05). We found this advantage, however, only with a longer duration of item presentation (Sex × Presentation Duration interaction, ηp² = .16, p < .05). Women's advantage in face recognition was visible mainly if participants had the possibility to reencode faces during previous test trials. Our results suggest women do not have a better face recognition memory than men per se, but may profit more than men from longer durations of presentation during encoding or the possibility for reencoding. Future research on sex differences in face recognition memory should explicate possible causes for the better encoding of face information in women.
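The ηp² values in this abstract are partial eta squared, the proportion of effect-plus-error variance attributable to an effect in an ANOVA. A minimal sketch of the computation, with hypothetical sums of squares chosen to reproduce an effect size of .15 (not the study's actual ANOVA table):

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared: SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical ANOVA sums of squares: SS_effect = 3.0, SS_error = 17.0.
print(round(partial_eta_squared(3.0, 17.0), 2))  # → 0.15
```

Unlike classical eta squared, the partial version excludes variance explained by other factors in the design from the denominator, which is why it is the conventional effect size for factorial ANOVAs like the one reported here.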

  20. Children's Recognition of Emotional Facial Expressions Through Photographs and Drawings.

    PubMed

    Brechet, Claire

    2017-01-01

The author's purpose was to examine children's recognition of emotional facial expressions, by comparing two types of stimulus: photographs and drawings. The author aimed to investigate whether drawings could be considered a more evocative material than photographs, as a function of age and emotion. Five- and 7-year-old children were presented with photographs and drawings displaying facial expressions of four basic emotions (i.e., happiness, sadness, anger, and fear) and were asked to perform a matching task by pointing to the face corresponding to the target emotion labeled by the experimenter. The photographs we used were selected from the Radboud Faces Database, and the drawings were designed on the basis of both the facial components involved in the expression of these emotions and the graphic cues children tend to use when asked to depict these emotions in their own drawings. Our results show that drawings are better recognized than photographs for sadness, anger, and fear (with no difference for happiness, due to a ceiling effect), and that the difference between the two types of stimuli tends to be more important for 5-year-olds than for 7-year-olds. These results are discussed in view of their implications, both for future research and for practical application.

  1. Haloperidol increases false recognition memory of thematically related pictures in healthy volunteers.

    PubMed

    Guarnieri, Regina V; Buratto, Luciano G; Gomes, Carlos F A; Ribeiro, Rafaela L; de Souza, Altay A Lino; Stein, Lilian M; Galduróz, José C; Bueno, Orlando F A

    2017-01-01

Dopamine can modulate long-term episodic memory. Its potential role in the generation of false memories, however, is less well known. In a randomized, double-blind, placebo-controlled experiment, 24 young healthy volunteers ingested a 4-mg oral dose of haloperidol, a dopamine D2-receptor antagonist, or placebo, before taking part in a recognition memory task. Haloperidol was active during both study and test phases of the experiment. Participants in the haloperidol group produced more false recognition responses than those in the placebo group, despite similar levels of correct recognition. These findings show that dopamine blockade in healthy volunteers can specifically increase false recognition memory. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Females scan more than males: a potential mechanism for sex differences in recognition memory.

    PubMed

    Heisz, Jennifer J; Pottruff, Molly M; Shore, David I

    2013-07-01

    Recognition-memory tests reveal individual differences in episodic memory; however, by themselves, these tests provide little information regarding the stage (or stages) in memory processing at which differences are manifested. We used eye-tracking technology, together with a recognition paradigm, to achieve a more detailed analysis of visual processing during encoding and retrieval. Although this approach may be useful for assessing differences in memory across many different populations, we focused on sex differences in face memory. Females outperformed males on recognition-memory tests, and this advantage was directly related to females' scanning behavior at encoding. Moreover, additional exposures to the faces reduced sex differences in face recognition, which suggests that males may be able to improve their recognition memory by extracting more information at encoding through increased scanning. A strategy of increased scanning at encoding may prove to be a simple way to enhance memory performance in other populations with memory impairment.

  3. Hemifield memory for attractiveness.

    PubMed

    Deblieck, C; Zaidel, D W

    2003-07-01

    In order to determine whether or not facial attractiveness plays a role in hemispheric facial memory, 35 right-handed participants first assigned attractiveness ratings to faces and then performed a recognition test on those faces in the left visual half-field (LVF) and right visual half-field (RVF). We found significant interactions between the experimental factors and visual half-field. There were significant differences at the extreme ends of the rating scale, that is, between the very unattractive and the very attractive faces: Female participants remembered very attractive faces of both women and men, with memory being better in the RVF than in the LVF. In contrast, the male participants remembered very unattractive faces of both women and men; RVF memory was better than LVF memory for female faces, whereas for male faces memory was superior in the LVF. The interactions with visual half-field suggest that hemispheric biases in remembering faces are influenced by degree of attractiveness.

  4. Acute Alcohol Effects on Repetition Priming and Word Recognition Memory with Equivalent Memory Cues

    ERIC Educational Resources Information Center

    Ray, Suchismita; Bates, Marsha E.

    2006-01-01

    Acute alcohol intoxication effects on memory were examined using a recollection-based word recognition memory task and a repetition priming task of memory for the same information without explicit reference to the study context. Memory cues were equivalent across tasks; encoding was manipulated by varying the frequency of occurrence (FOC) of words…

  5. The interaction between embodiment and empathy in facial expression recognition

    PubMed Central

    Jospe, Karine; Flöel, Agnes; Lavidor, Michal

    2018-01-01

    Previous research has demonstrated that the Action-Observation Network (AON) is involved in both emotional-embodiment (empathy) and action-embodiment mechanisms. In this study, we hypothesized that interfering with the AON will impair action recognition and that this impairment will be modulated by empathy levels. In Experiment 1 (n = 90), participants were asked to recognize facial expressions while their facial motion was restricted. In Experiment 2 (n = 50), we interfered with the AON by applying transcranial Direct Current Stimulation to the motor cortex. In both experiments, we found that interfering with the AON impaired the performance of participants with high empathy levels; however, for the first time, we demonstrated that the interference enhanced the performance of participants with low empathy. This novel finding suggests that the embodiment module may be flexible, and that it can be enhanced in individuals with low empathy by simple manipulation of motor activation. PMID:29378022

  6. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    PubMed

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (c) 2015 APA, all rights reserved.

  7. The posterior parietal cortex in recognition memory: a neuropsychological study.

    PubMed

    Haramati, Sharon; Soroker, Nachum; Dudai, Yadin; Levy, Daniel A

    2008-01-01

    Several recent functional neuroimaging studies have reported robust bilateral activation (L>R) in lateral posterior parietal cortex and precuneus during recognition memory retrieval tasks. It has not yet been determined what cognitive processes are represented by those activations. In order to examine whether parietal lobe-based processes are necessary for basic episodic recognition abilities, we tested a group of 17 first-incident CVA patients whose cortical damage included (but was not limited to) extensive unilateral posterior parietal lesions. These patients performed a series of tasks that yielded parietal activations in previous fMRI studies: yes/no recognition judgments on visual words and on colored object pictures and identifiable environmental sounds. We found that patients with left hemisphere lesions were not impaired compared to controls in any of the tasks. Patients with right hemisphere lesions were not significantly impaired in memory for visual words, but were impaired in recognition of object pictures and sounds. Two lesion-behavior analyses, area-based correlations and voxel-based lesion symptom mapping (VLSM), indicate that these impairments resulted from extra-parietal damage, specifically to frontal and lateral temporal areas. These findings suggest that extensive parietal damage does not impair recognition performance. We suggest that parietal activations recorded during recognition memory tasks might reflect peri-retrieval processes, such as the storage of retrieved memoranda in a working memory buffer for further cognitive processing.

  8. Using Maintenance Rehearsal to Explore Recognition Memory

    ERIC Educational Resources Information Center

    Humphreys, Michael S.; Maguire, Angela M.; McFarlane, Kimberley A.; Burt, Jennifer S.; Bolland, Scott W.; Murray, Krista L.; Dunn, Ryan

    2010-01-01

    We examined associative and item recognition using the maintenance rehearsal paradigm. Our intent was to control for mnemonic strategies; to produce a low, graded level of learning; and to provide evidence of the role of attention in long-term memory. An advantage for low-frequency words emerged in both associative and item recognition at very low…

  9. The effect of forced choice on facial emotion recognition: a comparison to open verbal classification of emotion labels

    PubMed Central

    Limbrecht-Ecklundt, Kerstin; Scheck, Andreas; Jerg-Bretzke, Lucia; Walter, Steffen; Hoffmann, Holger; Traue, Harald C.

    2013-01-01

    Objective: This article includes the examination of potential methodological problems of the application of a forced choice response format in facial emotion recognition. Methodology: 33 subjects were presented with validated facial stimuli. The task was to make a decision about which emotion was shown. In addition, the subjective certainty concerning the decision was recorded. Results: The detection rates are 68% for fear, 81% for sadness, 85% for anger, 87% for surprise, 88% for disgust, and 94% for happiness, and are thus well above the random probability. Conclusion: This study refutes the concern that the use of forced choice formats may not adequately reflect actual recognition performance. The use of standardized tests to examine emotion recognition ability leads to valid results and can be used in different contexts. For example, the images presented here appear suitable for diagnosing deficits in emotion recognition in the context of psychological disorders and for mapping treatment progress. PMID:23798981

  10. Verbal Memory Functioning in Adolescents and Young Adults with Costello Syndrome: Evidence for Relative Preservation in Recognition Memory

    PubMed Central

    Schwartz, David D.; Katzenstein, Jennifer M.; Hopkins, Elisabeth; Stabley, Deborah L.; Sol-Church, Katia; Gripp, Karen W.; Axelrad, Marni E.

    2013-01-01

    Costello syndrome (CS) is a rare genetic disorder caused by germline mutations in the HRAS proto-oncogene which belongs to the family of syndromes called rasopathies. HRAS plays a key role in synaptic long-term potentiation (LTP) and memory formation. Prior research has found impaired recall memory in CS despite enhancement in LTP that would predict memory preservation. Based on findings in other rasopathies, we hypothesized that the memory deficit in CS would be specific to recall, and that recognition memory would show relative preservation. Memory was tested using word-list learning and story memory tasks with both recall and recognition trials, a design that allowed us to examine these processes separately. Participants were 11 adolescents and young adults with molecularly confirmed CS, all of whom fell in the mild to moderate range of intellectual disability. Results indicated a clear dissociation between verbal recall, which was impaired (M = 69 ± 14), and recognition memory, which was relatively intact (M = 86 ± 14). Story recognition was highly correlated with listening comprehension (r = .986), which also fell in the low-average range (M = 80 ± 12.9). Performance on other measures of linguistic ability and academic skills was impaired. The findings suggest relatively preserved recognition memory that also provides some support for verbal comprehension. This is the first report of relatively normal performance in a cognitive domain in CS. Further research is needed to better understand the mechanisms by which altered RAS-MAPK signaling affects neuronal plasticity and memory processes in the brain. PMID:23918324

  11. Fluency Effects in Recognition Memory: Are Perceptual Fluency and Conceptual Fluency Interchangeable?

    ERIC Educational Resources Information Center

    Lanska, Meredith; Olds, Justin M.; Westerman, Deanne L.

    2014-01-01

    On a recognition memory test, both perceptual and conceptual fluency can engender a sense of familiarity and elicit recognition memory illusions. To date, perceptual and conceptual fluency have been studied separately but are they interchangeable in terms of their influence on recognition judgments? Five experiments compared the effect of…

  12. The neural substrates of recognition memory for verbal information: spanning the divide between short- and long-term memory.

    PubMed

    Buchsbaum, Bradley R; Padmanabhan, Aarthi; Berman, Karen Faith

    2011-04-01

    One of the classic categorical divisions in the history of memory research is that between short-term and long-term memory. Indeed, because memory for the immediate past (a few seconds) and memory for the relatively more remote past (several seconds and beyond) are assumed to rely on distinct neural systems, more often than not, memory research has focused either on short- (or "working memory") or on long-term memory. Using an auditory-verbal continuous recognition paradigm designed for fMRI, we examined how the neural signatures of recognition memory change across an interval of time (from 2.5 to 30 sec) that spans this hypothetical division between short- and long-term memory. The results revealed that activity during successful auditory-verbal item recognition in inferior parietal cortex and the posterior superior temporal lobe was maximal for early lags, whereas, conversely, activity in the left inferior frontal gyrus increased as a function of lag. Taken together, the results reveal that as the interval between item repetitions increases, there is a shift in the distribution of memory-related activity that moves from posterior temporo-parietal cortex (lags 1-4) to inferior frontal regions (lags 5-10), indicating that as time advances, the burden of recognition memory is increasingly placed on top-down retrieval mechanisms that are mediated by structures in inferior frontal cortex.

  13. Hippocampal activity during recognition memory co-varies with the accuracy and confidence of source memory judgments.

    PubMed

    Yu, Sarah S; Johnson, Jeffrey D; Rugg, Michael D

    2012-06-01

    It has been proposed that the hippocampus selectively supports retrieval of contextual associations, but an alternative view holds that the hippocampus supports strong memories regardless of whether they contain contextual information. We employed a memory test that combined the 'Remember/Know' and source memory procedures, which allowed test items to be segregated both by memory strength (recognition accuracy) and, separately, by the quality of the contextual information that could be retrieved (indexed by the accuracy/confidence of a source memory judgment). As measured by fMRI, retrieval-related hippocampal activity tracked the quality of retrieved contextual information and not memory strength. These findings are consistent with the proposal that the hippocampus supports contextual recollection rather than recognition memory more generally. Copyright © 2011 Wiley Periodicals, Inc.

  14. Cerebro-facio-thoracic dysplasia (Pascual-Castroviejo syndrome): Identification of a novel mutation, use of facial recognition analysis, and review of the literature.

    PubMed

    Tender, Jennifer A F; Ferreira, Carlos R

    2018-04-13

    Cerebro-facio-thoracic dysplasia (CFTD) is a rare, autosomal recessive disorder characterized by facial dysmorphism, cognitive impairment and distinct skeletal anomalies and has been linked to the TMCO1 defect syndrome. To describe two siblings with features consistent with CFTD with a novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene. We conducted a literature review and summarized the clinical features and laboratory results of two siblings with a novel pathogenic variant in the TMCO1 gene. Facial recognition analysis was utilized to assess the specificity of facial traits. The novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene is responsible for the clinical features of CFTD in two siblings. Facial recognition analysis allows unambiguous distinction of this syndrome against controls.

  15. What pharmacological interventions indicate concerning the role of the perirhinal cortex in recognition memory

    PubMed Central

    Brown, M.W.; Barker, G.R.I.; Aggleton, J.P.; Warburton, E.C.

    2012-01-01

    Findings of pharmacological studies that have investigated the involvement of specific regions of the brain in recognition memory are reviewed. The particular emphasis of the review concerns what such studies indicate concerning the role of the perirhinal cortex in recognition memory. Most of the studies involve rats and most have investigated recognition memory for objects. Pharmacological studies provide a large body of evidence supporting the essential role of the perirhinal cortex in the acquisition, consolidation and retrieval of object recognition memory. Such studies provide increasingly detailed evidence concerning both the neurotransmitter systems and the underlying intracellular mechanisms involved in recognition memory processes. They have provided evidence in support of synaptic weakening as a major synaptic plastic process within perirhinal cortex underlying object recognition memory. They have also supplied confirmatory evidence that there is more than one synaptic plastic process involved. The demonstrated necessity to long-term recognition memory of intracellular signalling mechanisms related to synaptic modification within perirhinal cortex establishes a central role for the region in the information storage underlying such memory. Perirhinal cortex is thereby established as an information storage site rather than solely a processing station. Pharmacological studies have also supplied new evidence concerning the detailed roles of other regions, including the hippocampus and the medial prefrontal cortex in different types of recognition memory tasks that include a spatial or temporal component. In so doing, they have also further defined the contribution of perirhinal cortex to such tasks.
To date it appears that the contribution of perirhinal cortex to associative and temporal order memory reflects that in simple object recognition memory, namely that perirhinal cortex provides information concerning objects and their prior occurrence (novelty/familiarity).

  16. The effect of word concreteness on recognition memory.

    PubMed

    Fliessbach, K; Weis, S; Klaver, P; Elger, C E; Weber, B

    2006-09-01

    Concrete words that are readily imagined are better remembered than abstract words. Theoretical explanations for this effect either claim a dual coding of concrete words in the form of both a verbal and a sensory code (dual-coding theory), or a more accessible semantic network for concrete words than for abstract words (context-availability theory). However, the neural mechanisms of improved memory for concrete versus abstract words are poorly understood. Here, we investigated the processing of concrete and abstract words during encoding and retrieval in a recognition memory task using event-related functional magnetic resonance imaging (fMRI). As predicted, memory performance was significantly better for concrete words than for abstract words. Abstract words elicited stronger activations of the left inferior frontal cortex both during encoding and recognition than did concrete words. Stronger activation of this area was also associated with successful encoding for both abstract and concrete words. Concrete words elicited stronger activations bilaterally in the posterior inferior parietal lobe during recognition. The left parietal activation was associated with correct identification of old stimuli. The anterior precuneus, left cerebellar hemisphere and the posterior and anterior cingulate cortex showed activations both for successful recognition of concrete words and for online processing of concrete words during encoding. Additionally, we observed a correlation across subjects between brain activity in the left anterior fusiform gyrus and hippocampus during recognition of learned words and the strength of the concreteness effect. These findings support the idea of specific brain processes for concrete words, which are reactivated during successful recognition.

  17. Individual differences in false memory from misinformation: cognitive factors.

    PubMed

    Zhu, Bi; Chen, Chuansheng; Loftus, Elizabeth F; Lin, Chongde; He, Qinghua; Chen, Chunhui; Li, He; Xue, Gui; Lu, Zhonglin; Dong, Qi

    2010-07-01

    This research investigated the cognitive correlates of false memories that are induced by the misinformation paradigm. A large sample of Chinese college students (N=436) participated in a misinformation procedure and also took a battery of cognitive tests. Results revealed sizable and systematic individual differences in false memory arising from exposure to misinformation. False memories were significantly and negatively correlated with measures of intelligence (measured with Raven's Advanced Progressive Matrices and Wechsler Adult Intelligence Scale), perception (Motor-Free Visual Perception Test, Change Blindness, and Tone Discrimination), memory (Wechsler Memory Scales and 2-back Working Memory tasks), and face judgement (Face Recognition and Facial Expression Recognition). These findings suggest that people with relatively low intelligence and poor perceptual abilities might be more susceptible to the misinformation effect.

  18. Facial biases on vocal perception and memory.

    PubMed

    Boltz, Marilyn G

    2017-06-01

    Does a speaker's face influence the way their voice is heard and later remembered? This question was addressed through two experiments where in each, participants listened to middle-aged voices accompanied by faces that were either age-appropriate, younger or older than the voice or, as a control, no face at all. In Experiment 1, participants evaluated each voice on various acoustical dimensions and speaker characteristics. The results showed that facial displays influenced perception such that the same voice was heard differently depending on the age of the accompanying face. Experiment 2 further revealed that facial displays led to memory distortions that were age-congruent in nature. These findings illustrate that faces can activate certain social categories and preconceived stereotypes that then influence vocal and person perception in a corresponding fashion. Processes of face/voice integration are very similar to those of music/film, indicating that the two areas can mutually inform one another and perhaps, more generally, reflect a centralized mechanism of cross-sensory integration. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Witnesses' blindness for their own facial recognition decisions: a field study.

    PubMed

    Sagana, Anna; Sauerland, Melanie; Merckelbach, Harald

    2013-01-01

    In a field study, we examined choice blindness for eyewitnesses' facial recognition decisions. Seventy-one pedestrians were engaged in a conversation by two experimenters who pretended to be tourists in the center of a European city. After a short interval, pedestrians were asked to identify the two experimenters from separate simultaneous six-person photo lineups. Following each of the two forced-choice recognition decisions, they were confronted with their selection and asked to motivate their decision. However, for one of the recognition decisions, the chosen lineup member was exchanged with a previously unidentified member. Blindness for this identity manipulation occurred at the rate of 40.8%. Furthermore, the detection rate varied as a function of similarity (high vs. low) between the original choice and the manipulated outcome. Finally, choice manipulations undermined the confidence-accuracy relation for detectors to a greater degree than for blind participants. Stimulus ambiguity is discussed as a moderator of choice blindness. Copyright © 2013 John Wiley & Sons, Ltd.

  20. Recognition Decisions from Visual Working Memory Are Mediated by Continuous Latent Strengths

    ERIC Educational Resources Information Center

    Ricker, Timothy J.; Thiele, Jonathan E.; Swagman, April R.; Rouder, Jeffrey N.

    2017-01-01

    Making recognition decisions often requires us to reference the contents of working memory, the information available for ongoing cognitive processing. As such, understanding how recognition decisions are made when based on the contents of working memory is of critical importance. In this work we examine whether recognition decisions based on the…

  1. Acute effects of delta-9-tetrahydrocannabinol, cannabidiol and their combination on facial emotion recognition: a randomised, double-blind, placebo-controlled study in cannabis users.

    PubMed

    Hindocha, Chandni; Freeman, Tom P; Schafer, Grainne; Gardener, Chelsea; Das, Ravi K; Morgan, Celia J A; Curran, H Valerie

    2015-03-01

    Acute administration of the primary psychoactive constituent of cannabis, Δ-9-tetrahydrocannabinol (THC), impairs human facial affect recognition, implicating the endocannabinoid system in emotional processing. Another main constituent of cannabis, cannabidiol (CBD), has seemingly opposite functional effects on the brain. This study aimed to determine the effects of THC and CBD, both alone and in combination, on emotional facial affect recognition. 48 volunteers, selected for high and low frequency of cannabis use and schizotypy, were administered THC (8 mg), CBD (16 mg), THC+CBD (8 mg+16 mg) and placebo, by inhalation, in a 4-way, double-blind, placebo-controlled crossover design. They completed an emotional facial affect recognition task including fearful, angry, happy, sad, surprise and disgust faces varying in intensity from 20% to 100%. A visual analogue scale (VAS) of feeling 'stoned' was also completed. In comparison to placebo, CBD improved emotional facial affect recognition at 60% emotional intensity; THC was detrimental to the recognition of ambiguous faces of 40% intensity. The combination of THC+CBD produced no impairment. Relative to placebo, both THC alone and combined THC+CBD equally increased feelings of being 'stoned'. CBD did not influence feelings of being 'stoned'. No effects of frequency of use or schizotypy were found. In conclusion, CBD improves recognition of emotional facial affect and attenuates the impairment induced by THC. This is the first human study examining the effects of different cannabinoids on emotional processing. It provides preliminary evidence that different pharmacological agents acting upon the endocannabinoid system can both improve and impair recognition of emotional faces. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  3. The Impact of Left and Right Intracranial Tumors on Picture and Word Recognition Memory

    ERIC Educational Resources Information Center

    Goldstein, Bram; Armstrong, Carol L.; Modestino, Edward; Ledakis, George; John, Cameron; Hunter, Jill V.

    2004-01-01

    This study investigated the effects of left and right intracranial tumors on picture and word recognition memory. We hypothesized that left hemispheric (LH) patients would exhibit greater word recognition memory impairment than right hemispheric (RH) patients, with no significant hemispheric group picture recognition memory differences. The LH…

  4. Selective impairment of facial recognition due to a haematoma restricted to the right fusiform and lateral occipital region

    PubMed Central

    Wada, Y; Yamamoto, T

    2001-01-01

    A 67 year old right handed Japanese man developed prosopagnosia caused by a haemorrhage. His only deficit was the inability to perceive and discriminate unfamiliar faces, and to recognise familiar faces. He did not show deficits in visual or visuospatial perception of non-facial stimuli, alexia, visual agnosia, or topographical disorientation. Brain MRI showed a haematoma limited to the right fusiform and the lateral occipital region. Single photon emission computed tomography confirmed that there was no decreased blood flow in the opposite left cerebral hemisphere. The present case indicates that a well placed small lesion in the right fusiform gyrus and the adjacent area can cause isolated impairment of facial recognition. As far as we know, no published case has demonstrated this exact lesion site, which recent functional MRI studies have indicated as the most critical area in facial recognition. PMID:11459906

  5. Learning the moves: the effect of familiarity and facial motion on person recognition across large changes in viewing format.

    PubMed

    Roark, Dana A; O'Toole, Alice J; Abdi, Hervé; Barrett, Susan E

    2006-01-01

    Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (experiment 1a) or were familiarized with faces and were tested with gait videos (experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos, than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.

  6. The effect of mood-context on visual recognition and recall memory.

    PubMed

    Robinson, Sarita J; Rollings, Lucy J L

    2011-01-01

    Although it is widely known that memory is enhanced when encoding and retrieval occur in the same state, the impact of elevated stress/arousal is less understood. This study explores mood-dependent memory's effects on visual recognition and recall of material memorized either in a neutral mood or under higher stress/arousal levels. Participants' (N = 60) recognition and recall were assessed while they experienced either the same or a mismatched mood at retrieval. The results suggested that both visual recognition and recall memory were higher when participants experienced the same mood at encoding and retrieval compared with those who experienced a mismatch in mood context between encoding and retrieval. These findings offer support for a mood dependency effect on both the recognition and recall of visual information.

  7. Neurotrophins play differential roles in short and long-term recognition memory.

    PubMed

    Callaghan, Charlotte K; Kelly, Aine M

    2013-09-01

    The neurotrophin family of proteins are believed to mediate various forms of synaptic plasticity in the adult brain. Here we have assessed the roles of these proteins in object recognition memory in the rat, using icv infusions of function-blocking antibodies or the tyrosine kinase antagonist, tyrphostin AG879, to block Trk receptors. We report that tyrphostin AG879 impairs both short-term and long-term recognition memory, indicating a requirement for Trk receptor activation in both processes. The effect of inhibition of each of the neurotrophins with activity-blocking neutralising antibodies was also tested. Treatment with anti-BDNF, anti-NGF or anti-NT4 had no effect on short-term memory, but blocked long-term recognition memory. Treatment with anti-NT3 had no effect on either process. We also assessed changes in expression of neurotrophins and their respective receptors in the hippocampus, dentate gyrus and perirhinal cortex over a 24 h period following training in the object recognition task. We observed time-dependent changes in expression of the Trk receptors and their ligands in the dentate gyrus and perirhinal cortex. The data are consistent with a pivotal role for neurotrophic factors in the expression of recognition memory. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Recognition Decisions From Visual Working Memory Are Mediated by Continuous Latent Strengths.

    PubMed

    Ricker, Timothy J; Thiele, Jonathan E; Swagman, April R; Rouder, Jeffrey N

    2017-08-01

    Making recognition decisions often requires us to reference the contents of working memory, the information available for ongoing cognitive processing. As such, understanding how recognition decisions are made when based on the contents of working memory is of critical importance. In this work we examine whether recognition decisions based on the contents of visual working memory follow a continuous decision process of graded information about the correct choice or a discrete decision process reflecting only knowing and guessing. We find a clear pattern in favor of a continuous latent strength model of visual working memory-based decision making, supporting the notion that visual recognition decision processes are impacted by the degree of matching between the contents of working memory and the choices given. Relation to relevant findings and the implications for human information processing more generally are discussed. Copyright © 2016 Cognitive Science Society, Inc.
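    The continuous vs. discrete decision processes contrasted above can be illustrated with a minimal simulation (a sketch with hypothetical parameters and confidence criteria, not the authors' actual model): a signal-detection observer whose latent memory strength is graded, versus a two-state observer that either knows the answer or guesses.

```python
import random

random.seed(1)

def continuous_observer(is_match, d_prime=1.5, n=10000):
    """Continuous latent-strength (signal-detection) observer:
    memory strength is graded, so confidence tracks evidence.
    Returns the distribution over a 4-point confidence scale."""
    counts = [0] * 4
    for _ in range(n):
        strength = random.gauss(d_prime if is_match else 0.0, 1.0)
        if strength < 0.0:      counts[0] += 1  # sure "new"
        elif strength < 0.75:   counts[1] += 1
        elif strength < 1.5:    counts[2] += 1
        else:                   counts[3] += 1  # sure "old"
    return [c / n for c in counts]

def discrete_observer(is_match, p_know=0.6, n=10000):
    """Discrete two-state observer: either 'knows' (high-confidence,
    correct response) or guesses uniformly across confidence levels."""
    counts = [0] * 4
    for _ in range(n):
        if random.random() < p_know:
            counts[3 if is_match else 0] += 1  # certain, correct
        else:
            counts[random.randrange(4)] += 1   # uninformed guess
    return [c / n for c in counts]

# Under the continuous model, intermediate confidence levels carry
# graded information about the evidence; under the discrete model they
# reflect only guessing and are therefore flat.
print(continuous_observer(True))
print(discrete_observer(True))
```

    Comparing the shapes of such confidence distributions (for example, via ROC curves) is the standard way these two model classes are distinguished empirically.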

  9. Effects of Pre-Experimental Knowledge on Recognition Memory

    ERIC Educational Resources Information Center

    Bird, Chris M.; Davies, Rachel A.; Ward, Jamie; Burgess, Neil

    2011-01-01

    The influence of pre-experimental autobiographical knowledge on recognition memory was investigated using as memoranda faces that were either personally known or unknown to the participant. Under a dual process theory, such knowledge boosted both recollection- and familiarity-based recognition judgements. Under an unequal variance signal detection…

  10. The Doors and People Test: The Effect of Frontal Lobe Lesions on Recall and Recognition Memory Performance

    PubMed Central

    2016-01-01

    Objective: Memory deficits in patients with frontal lobe lesions are most apparent on free recall tasks that require the selection, initiation, and implementation of retrieval strategies. The effect of frontal lesions on recognition memory performance is less clear with some studies reporting recognition memory impairments but others not. The majority of these studies do not directly compare recall and recognition within the same group of frontal patients, assessing only recall or recognition memory performance. Other studies that do compare recall and recognition in the same frontal group do not consider recall or recognition tests that are comparable for difficulty. Recognition memory impairments may not be reported because recognition memory tasks are less demanding. Method: This study aimed to investigate recall and recognition impairments in the same group of 47 frontal patients and 78 healthy controls. The Doors and People Test was administered as a neuropsychological test of memory as it assesses both verbal and visual recall and recognition using subtests that are matched for difficulty. Results: Significant verbal and visual recall and recognition impairments were found in the frontal patients. Conclusion: These results demonstrate that when frontal patients are assessed on recall and recognition memory tests of comparable difficulty, memory impairments are found on both types of episodic memory test. PMID:26752123

  11. The Doors and People Test: The effect of frontal lobe lesions on recall and recognition memory performance.

    PubMed

    MacPherson, Sarah E; Turner, Martha S; Bozzali, Marco; Cipolotti, Lisa; Shallice, Tim

    2016-03-01

    Memory deficits in patients with frontal lobe lesions are most apparent on free recall tasks that require the selection, initiation, and implementation of retrieval strategies. The effect of frontal lesions on recognition memory performance is less clear with some studies reporting recognition memory impairments but others not. The majority of these studies do not directly compare recall and recognition within the same group of frontal patients, assessing only recall or recognition memory performance. Other studies that do compare recall and recognition in the same frontal group do not consider recall or recognition tests that are comparable for difficulty. Recognition memory impairments may not be reported because recognition memory tasks are less demanding. This study aimed to investigate recall and recognition impairments in the same group of 47 frontal patients and 78 healthy controls. The Doors and People Test was administered as a neuropsychological test of memory as it assesses both verbal and visual recall and recognition using subtests that are matched for difficulty. Significant verbal and visual recall and recognition impairments were found in the frontal patients. These results demonstrate that when frontal patients are assessed on recall and recognition memory tests of comparable difficulty, memory impairments are found on both types of episodic memory test. (c) 2016 APA, all rights reserved.

  12. Effects of hydrocortisone on false memory recognition in healthy men and women.

    PubMed

    Duesenberg, Moritz; Weber, Juliane; Schaeuffele, Carmen; Fleischer, Juliane; Hellmann-Regen, Julian; Roepke, Stefan; Moritz, Steffen; Otte, Christian; Wingenfeld, Katja

    2016-12-01

    Previous studies on the effect of stress on false memories, using psychosocial and physiological stressors, have yielded diverse results. In the present study, we systematically tested the effect of exogenous hydrocortisone using a false memory paradigm. In this placebo-controlled study, 37 healthy men and 38 healthy women (mean age 24.59 years) received either 10 mg of hydrocortisone or placebo 75 min before completing the false memory paradigm, that is, the Deese-Roediger-McDermott (DRM) paradigm. We used emotionally charged and neutral DRM-based word lists to compare false recognition rates with true recognition rates. Overall, we expected an increase in false memory after hydrocortisone compared to placebo. No differences between the cortisol and the placebo group were revealed for false and for true recognition performance. In general, false recognition rates were lower compared to true recognition rates. Furthermore, we found a valence effect (neutral, positive, negative, disgust word stimuli), indicating higher rates of true and false recognition for emotional compared to neutral words. We further found an interaction effect between sex and recognition. Post hoc t tests showed that for true recognition, women showed significantly better memory performance than men, independent of treatment. This study does not support the hypothesis that cortisol decreases the ability to distinguish between old versus novel words in young healthy individuals. However, sex and emotional valence of word stimuli appear to be important moderators. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Music, memory, and Alzheimer's disease: is music recognition spared in dementia, and how can it be assessed?

    PubMed

    Cuddy, Lola L; Duffin, Jacalyn

    2005-01-01

    Despite intriguing and suggestive clinical observations, no formal research has assessed the possible sparing of musical recognition and memory in Alzheimer's dementia (AD). A case study is presented of an 84-year-old woman with severe cognitive impairment implicating AD, but for whom music recognition and memory, according to her caregivers, appeared to be spared. The hypotheses addressed were, first, that memory for familiar music may be spared in dementia, and second, that musical recognition and memory may be reliably assessed with existing tests if behavioral observation is employed to overcome the problem of verbal or written communication. Our hypotheses were stimulated by the patient EN, for whom diagnosis of AD became probable in 2000. With severe problems in memory, language, and cognition, she now has a mini-mental status score of 8 (out of 30) and is unable to understand or recall standard instructions. In order to assess her music recognition abilities, three tests from the previous literature were adapted for behavioral observation. Two tests involved the discrimination of familiar melodies from unfamiliar melodies. The third involved the detection of distortions ("wrong" notes) in familiar melodies and discrimination of distorted melodies from melodies correctly reproduced. Test melodies were presented to EN on a CD player and her responses were observed by two test administrators. EN responded to familiar melodies by singing along, usually with the words, and often continuing to sing after the stimulus had stopped. She never responded to the unfamiliar melodies. She responded to distorted melodies with facial expressions - surprise, laughter, a frown, or an exclamation, "Oh, dear!"; she never responded in this way to the undistorted melodies. Allowing these responses as indicators of detection, the results for EN were in the normal or near normal range of scores for elderly controls. As well, lyrics to familiar melodies, spoken in a conversational…

  14. Facial Emotion Recognition in Children with High Functioning Autism and Children with Social Phobia

    ERIC Educational Resources Information Center

    Wong, Nina; Beidel, Deborah C.; Sarver, Dustin E.; Sims, Valerie

    2012-01-01

    Recognizing facial affect is essential for effective social functioning. This study examines emotion recognition abilities in children aged 7-13 years with High Functioning Autism (HFA = 19), Social Phobia (SP = 17), or typical development (TD = 21). Findings indicate that all children identified certain emotions more quickly (e.g., happy [less…

  15. Recognition of facial emotion and affective prosody in children with ASD (+ADHD) and their unaffected siblings.

    PubMed

    Oerlemans, Anoek M; van der Meer, Jolanda M J; van Steijn, Daphne J; de Ruiter, Saskia W; de Bruijn, Yvette G E; de Sonneville, Leo M J; Buitelaar, Jan K; Rommelse, Nanda N J

    2014-05-01

    Autism is a highly heritable and clinically heterogeneous neuropsychiatric disorder that frequently co-occurs with other psychopathologies, such as attention-deficit/hyperactivity disorder (ADHD). An approach to parse heterogeneity is by forming more homogeneous subgroups of autism spectrum disorder (ASD) patients based on their underlying, heritable cognitive vulnerabilities (endophenotypes). Emotion recognition is a likely endophenotypic candidate for ASD and possibly for ADHD. Therefore, this study aimed to examine whether emotion recognition is a viable endophenotypic candidate for ASD and to assess the impact of comorbid ADHD in this context. A total of 90 children with ASD (43 with and 47 without ADHD), 79 ASD unaffected siblings, and 139 controls aged 6-13 years, were included to test recognition of facial emotion and affective prosody. Our results revealed that the recognition of both facial emotion and affective prosody was impaired in children with ASD and aggravated by the presence of ADHD. The latter could only be partly explained by typical ADHD cognitive deficits, such as inhibitory and attentional problems. The performance of unaffected siblings could overall be considered at an intermediate level, performing somewhat worse than the controls and better than the ASD probands. Our findings suggest that emotion recognition might be a viable endophenotype in ASD and a fruitful target in future family studies of the genetic contribution to ASD and comorbid ADHD. Furthermore, our results suggest that children with comorbid ASD and ADHD are at highest risk for emotion recognition problems.

  16. Facial affect recognition in early and late-stage schizophrenia patients.

    PubMed

    Romero-Ferreiro, María Verónica; Aguado, Luis; Rodriguez-Torresano, Javier; Palomo, Tomás; Rodriguez-Jimenez, Roberto; Pedreira-Massa, José Luis

    2016-04-01

    Prior studies have shown deficits in social cognition and emotion perception in first-episode psychosis (FEP) and multi-episode schizophrenia (MES) patients. These studies compared patients at different stages of the illness with only a single control group which differed in age from at least one clinical group. The present study provides new evidence of a differential pattern of deficit in facial affect recognition in FEP and MES patients using a double age-matched control design. Compared to their controls, FEP patients only showed impaired recognition of fearful faces (p=.007). In contrast to this, the MES patients showed a more generalized deficit compared to their age-matched controls, with impaired recognition of angry, sad and fearful faces (ps<.01) and an increased misattribution of emotional meaning to neutral faces. PANSS scores of FEP patients on Depressed factor correlated positively with the accuracy to recognize fearful expressions (r=.473). For the MES group fear recognition correlated positively with negative PANSS factor (r=.498) and recognition of sad and neutral expressions was inversely correlated with disorganized PANSS factor (r=-.461 and r=-.541, respectively). These results provide evidence that a generalized impairment of affect recognition is observed in advanced-stage patients and is not characteristic of the early stages of schizophrenia. Moreover, the finding that anomalous attribution of emotional meaning to neutral faces is observed only in MES patients suggests that an increased attribution of salience to social stimuli is a characteristic of social cognition in advanced stages of the disorder. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Enhanced tactile encoding and memory recognition in congenital blindness.

    PubMed

    D'Angiulli, Amedeo; Waraich, Paul

    2002-06-01

    Several behavioural studies have shown that early-blind persons possess superior tactile skills. Since neurophysiological data show that early-blind persons recruit visual as well as somatosensory cortex to carry out tactile processing (cross-modal plasticity), blind persons' sharper tactile skills may be related to cortical re-organisation resulting from loss of vision early in their life. To examine the nature of blind individuals' tactile superiority and its implications for cross-modal plasticity, we compared the tactile performance of congenitally totally blind, low-vision and sighted children on a raised-line picture identification test and re-test, assessing effects of task familiarity, exploratory strategy and memory recognition. What distinguished the blind from the other children was higher memory recognition and higher tactile encoding associated with efficient exploration. These results suggest that enhanced perceptual encoding and recognition memory may be two cognitive correlates of cross-modal plasticity in congenital blindness.

  18. Cerebro-facio-thoracic dysplasia (Pascual-Castroviejo syndrome): Identification of a novel mutation, use of facial recognition analysis, and review of the literature

    PubMed Central

    Tender, Jennifer A.F.; Ferreira, Carlos R.

    2018-01-01

    BACKGROUND: Cerebro-facio-thoracic dysplasia (CFTD) is a rare, autosomal recessive disorder characterized by facial dysmorphism, cognitive impairment and distinct skeletal anomalies and has been linked to the TMCO1 defect syndrome. OBJECTIVE: To describe two siblings with features consistent with CFTD with a novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene. METHODS: We conducted a literature review and summarized the clinical features and laboratory results of two siblings with a novel pathogenic variant in the TMCO1 gene. Facial recognition analysis was utilized to assess the specificity of facial traits. CONCLUSION: The novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene is responsible for the clinical features of CFTD in two siblings. Facial recognition analysis allows unambiguous distinction of this syndrome against controls. PMID:29682451

  19. Schematic drawings of facial expressions for emotion recognition and interpretation by preschool-aged children.

    PubMed

    MacDonald, P M; Kirkpatrick, S W; Sullivan, L A

    1996-11-01

    Schematic drawings of facial expressions were evaluated as a possible assessment tool for research on emotion recognition and interpretation involving young children. A subset of Ekman and Friesen's (1976) Pictures of Facial Affect was used as the standard for comparison. Preschool children (N = 138) were shown drawings and photographs in two context conditions for six emotions (anger, disgust, fear, happiness, sadness, and surprise). The overall correlation between accuracy for the photographs and drawings was .677. A significant difference was found for the stimulus condition (photographs vs. drawings) but not for the administration condition (label-based vs. context-based). Children were significantly more accurate in interpreting drawings than photographs and tended to be more accurate in identifying facial expressions in the label-based administration condition for both photographs and drawings than in the context-based administration condition.

  20. Ventromedial prefrontal cortex is obligatory for consolidation and reconsolidation of object recognition memory.

    PubMed

    Akirav, Irit; Maroun, Mouna

    2006-12-01

    Once consolidated, a long-term memory item could regain susceptibility to consolidation blockers, that is, reconsolidate, upon its reactivation. Both consolidation and reconsolidation require protein synthesis, but it is not yet known how similar these processes are in terms of molecular, cellular, and neural circuit mechanisms. Whereas most previous studies focused on aversive conditioning in the amygdala and the hippocampus, here we examine the role of the ventromedial prefrontal cortex (vmPFC) in consolidation and reconsolidation of object recognition memory. Object recognition memory is the ability to discriminate the familiarity of previously encountered objects. We found that microinfusion of the protein synthesis inhibitor anisomycin or the N-methyl-D-aspartate (NMDA) receptor antagonist D,L-2-amino-5-phosphonovaleric acid (APV) into the vmPFC, immediately after training, resulted in impairment of long-term (24 h) but not short-term (3 h) recognition memory. Similarly, microinfusion of anisomycin or APV into the vmPFC immediately after reactivation of the long-term memory impaired recognition memory 24 h, but not 3 h, post-reactivation. These results indicate that both protein synthesis and NMDA receptors are required for consolidation and reconsolidation of recognition memory in the vmPFC.

  1. Using maintenance rehearsal to explore recognition memory.

    PubMed

    Humphreys, Michael S; Maguire, Angela M; McFarlane, Kimberley A; Burt, Jennifer S; Bolland, Scott W; Murray, Krista L; Dunn, Ryan

    2010-01-01

    We examined associative and item recognition using the maintenance rehearsal paradigm. Our intent was to control for mnemonic strategies; to produce a low, graded level of learning; and to provide evidence of the role of attention in long-term memory. An advantage for low-frequency words emerged in both associative and item recognition at very low levels of learning. This early emergence casts doubt on explanations based on the traditional concept of recollection. A comparison of false alarms supports a role for item information or the joint use of cues but not familiarity in producing associative false alarms. We may also have found a way to measure the amount of attention being paid to a to-be-learned item or pair, independently of memory performance on the attended item. This result may be an important step in determining whether coherent theories about the role of attention in long- and short-term memory can be created. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  2. Recognition of facial, auditory, and bodily emotions in older adults.

    PubMed

    Ruffman, Ted; Halberstadt, Jamin; Murray, Janice

    2009-11-01

    Understanding older adults' social functioning difficulties requires insight into their recognition of emotion processing in voices and bodies, not just faces, the focus of most prior research. We examined 60 young and 61 older adults' recognition of basic emotions in facial, vocal, and bodily expressions, and when matching faces and bodies to voices, using 120 emotion items. Older adults were worse than young adults in 17 of 30 comparisons, with consistent difficulties in recognizing both positive (happy) and negative (angry and sad) vocal and bodily expressions. Nearly three quarters of older adults functioned at a level similar to the lowest one fourth of young adults, suggesting that age-related changes are common. In addition, we found that older adults' difficulty in matching emotions was not explained by difficulty on the component sources (i.e., faces or voices on their own), suggesting an additional problem of integration.

  3. Facial Affect Recognition in Violent and Nonviolent Antisocial Behavior Subtypes.

    PubMed

    Schönenberg, Michael; Mayer, Sarah Verena; Christian, Sandra; Louis, Katharina; Jusyte, Aiste

    2016-10-01

    Prior studies provide evidence for impaired recognition of distress cues in individuals exhibiting antisocial behavior. However, it remains unclear whether this deficit is generally associated with antisociality or may be specific to violent behavior only. To examine whether there are meaningful differences between the two behavioral dimensions, rule-breaking and aggression, violent and nonviolent incarcerated offenders as well as control participants were presented with an animated face recognition task in which a video sequence of a neutral face changed into an expression of one of the six basic emotions. The participants were instructed to press a button as soon as they were able to identify the emotional expression, allowing for an assessment of the perceived emotion onset. Both aggressive and nonaggressive offenders demonstrated a delayed perception of primarily fearful facial cues as compared to controls. These results suggest the importance of targeting impaired emotional processing in both types of antisocial behavior.

  4. Working memory and the identification of facial expression in patients with left frontal glioma.

    PubMed

    Mu, Yong-Gao; Huang, Ling-Juan; Li, Shi-Yun; Ke, Chao; Chen, Yu; Jin, Yu; Chen, Zhong-Ping

    2012-09-01

    Patients with brain tumors may have cognitive dysfunctions including memory deterioration, such as working memory, that affect quality of life. This study aimed to explore the presence of defects in working memory and the identification of facial expressions in patients with left frontal glioma. This case-control study recruited 11 matched pairs of patients and healthy control subjects (mean age ± standard deviation, 37.00 ± 10.96 years vs 36.73 ± 11.20 years; 7 male and 4 female) from March through December 2011. The psychological battery contained tests that estimate verbal/visual-spatial working memory, executive function, and the identification of facial expressions. According to the paired samples analysis, there were no differences in the anxiety and depression scores or in the intelligence quotients between the 2 groups (P > .05). All indices of the Digit Span Test were significantly worse in patients than in control subjects (P < .05), but the Tapping Test scores did not differ between patient and control groups. Of all 7 Wisconsin Card Sorting Test (WCST) indexes, only the Perseverative Response was significantly different between patients and control subjects (P < .05). Patients were significantly less accurate in detecting angry facial expressions than were control subjects (30.3% vs 57.6%; P < .05) but showed no deficits in the identification of other expressions. The backward indexes of the Digit Span Test were associated with emotion scores and tumor size and grade (P < .05). Patients with left frontal glioma had deficits in verbal working memory and the ability to identify anger. These may have resulted from damage to functional frontal cortex regions, in which roles in these 2 capabilities have not been confirmed. However, verbal working memory performance might be affected by emotional and tumor-related factors.

  5. Interplay between affect and arousal in recognition memory.

    PubMed

    Greene, Ciara M; Bahri, Pooja; Soto, David

    2010-07-23

    Emotional states linked to arousal and mood are known to affect the efficiency of cognitive performance. However, the extent to which memory processes may be affected by arousal, mood or their interaction is poorly understood. Following a study phase of abstract shapes, we altered the emotional state of participants by means of exposure to music that varied in both mood and arousal dimensions, leading to four different emotional states: (i) positive mood-high arousal; (ii) positive mood-low arousal; (iii) negative mood-high arousal; (iv) negative mood-low arousal. Following the emotional induction, participants performed a memory recognition test. Critically, there was an interaction between mood and arousal on recognition performance. Memory was enhanced in the positive mood-high arousal and in the negative mood-low arousal states, relative to the other emotional conditions. Neither mood nor arousal alone but their interaction appears most critical to understanding the emotional enhancement of memory.

  6. Subject-specific and pose-oriented facial features for face recognition across poses.

    PubMed

    Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping

    2012-10-01

    Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment to the database, while faces of other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This is under the assumption that in forensic applications, most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces of various poses are captured by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face with poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method well suited to handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an AdaBoost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is shown to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.
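    The classification stage described above, an AdaBoost-style weighted fusion of per-component classifiers, can be sketched as follows (an illustrative sketch only: the EHMM feature extraction is omitted, and the component names and training errors are hypothetical, not values from the paper):

```python
import math

def alpha_from_error(err):
    """Standard AdaBoost component weight: classifiers with lower
    training error receive exponentially larger voting weight."""
    return 0.5 * math.log((1 - err) / err)

def fuse(component_votes, alphas):
    """Weighted vote over binary component classifiers.
    component_votes: per-component decisions in {-1, +1}
    (+1 = match, -1 = non-match)."""
    total = sum(a * v for a, v in zip(alphas, component_votes))
    return 1 if total >= 0 else -1

# Hypothetical facial components (e.g., eyes, nose, mouth) with
# hypothetical training error rates:
errors = [0.10, 0.30, 0.45]
alphas = [alpha_from_error(e) for e in errors]

# A probe where the reliable first component votes "match" (+1) while
# the two weaker components vote "non-match" (-1): the fused decision
# follows the high-weight component.
print(fuse([+1, -1, -1], alphas))  # prints 1
```

    The weighting rule is what lets discriminative components (say, a distinctive eye region for one subject) dominate the final decision even when weaker components disagree.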

  7. Improved facial affect recognition in schizophrenia following an emotion intervention, but not training attention-to-facial-features or treatment-as-usual.

    PubMed

    Tsotsi, Stella; Kosmidis, Mary H; Bozikas, Vasilis P

    2017-08-01

    In schizophrenia, impaired facial affect recognition (FAR) has been associated with patients' overall social functioning. Interventions targeting attention or FAR per se have invariably yielded improved FAR performance in these patients. Here, we compared the effects of two interventions, one targeting FAR and one targeting attention-to-facial-features, with treatment-as-usual on patients' FAR performance. Thirty-nine outpatients with schizophrenia were randomly assigned to one of three groups: FAR intervention (training to recognize emotional information, conveyed by changes in facial features), attention-to-facial-features intervention (training to detect changes in facial features), and treatment-as-usual. Also, 24 healthy controls, matched for age and education, were assigned to one of the two interventions. Two FAR measurements, baseline and post-intervention, were conducted using an original experimental procedure with alternative sets of stimuli. We found improved FAR performance following the intervention targeting FAR in comparison to the other patient groups, which in fact was comparable to the pre-intervention performance of healthy controls in the corresponding intervention group. This improvement was more pronounced in recognizing fear. Our findings suggest that compared to interventions targeting attention, and treatment-as-usual, training programs targeting FAR can be more effective in improving FAR in patients with schizophrenia, particularly assisting them in perceiving threat-related information more accurately. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  8. Nobiletin improves emotional and novelty recognition memory but not spatial referential memory.

    PubMed

    Kang, Jiyun; Shin, Jung-Won; Kim, Yoo-Rim; Swanberg, Kelley M; Kim, Yooseung; Bae, Jae Ryong; Kim, Young Ki; Lee, Jinwon; Kim, Soo-Yeon; Sohn, Nak-Won; Maeng, Sungho

    2017-01-01

    How to maintain and enhance cognitive functions in both aged and young populations is a subject of wide interest. However, candidate memory-enhancing reagents are tested almost exclusively on lesioned or aged animals, and there is insufficient information on the type of memory these reagents can improve. Working memory, located in the prefrontal cortex, manages short-term sensory information, but, by gaining significant relevance, this information is converted to long-term memory by hippocampal formation and/or amygdala, followed by tagging with space-time or emotional cues, respectively. Nobiletin is a product of citrus peel known for cognitive-enhancing effects in various pharmacological and neurodegenerative disease models; yet it is not well studied in non-lesioned animals, and the type of memory that nobiletin can improve remains unclear. In this study, 8-week-old male mice were tested using behavioral measurements for working, spatial referential, emotional and visual recognition memory after daily administration of nobiletin. While nobiletin did not induce any change of spontaneous activity in the open field test, freezing by fear conditioning and novel object recognition increased. However, the effectiveness of spatial navigation in the Y-maze and Morris water maze was not improved. These results mean that nobiletin can specifically improve memories of emotionally salient information associated with fear and novelty, but not of spatial information without emotional saliency. Accordingly, the use of nobiletin on normal subjects as a memory enhancer would be more effective on emotional types but may have limited value for the improvement of episodic memories.

  9. Unraveling the Contributions of the Diencephalon to Recognition Memory: A Review

    ERIC Educational Resources Information Center

    Aggleton, John P.; Dumont, Julie R.; Warburton, Elizabeth Clea

    2011-01-01

    Both clinical investigations and studies with animals reveal nuclei within the diencephalon that are vital for recognition memory (the judgment of prior occurrence). This review seeks to identify these nuclei and to consider why they might be important for recognition memory. Despite the lack of clinical cases with circumscribed pathology within…

  10. High speed optical object recognition processor with massive holographic memory

    NASA Technical Reports Server (NTRS)

    Chao, T.; Zhou, H.; Reyes, G.

    2002-01-01

    Real-time object recognition using a compact grayscale optical correlator will be introduced. A holographic memory module for storing a large bank of optimum correlation filters, to accommodate the large data throughput rate needed for many real-world applications, has also been developed. System architecture of the optical processor and the holographic memory will be presented. Application examples of this object recognition technology will also be demonstrated.
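    The optical correlator described above matches an input scene against a bank of stored correlation filters and picks the strongest response. As a rough digital analogue (purely illustrative, not the authors' optical implementation), the core operation can be sketched as normalized correlation of an input patch against a filter bank; the function names and the toy 3x3 binary patterns below are hypothetical:

    ```python
    # Illustrative digital analogue of correlation-filter matching: score an
    # input patch against every stored filter and return the best match.
    # All names and patterns here are made up for the sketch.

    def correlate(a, b):
        """Normalized dot-product correlation between two equal-size patches."""
        num = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return num / (na * nb) if na and nb else 0.0

    def best_match(patch, filter_bank):
        """Return (label, score) of the filter correlating best with patch."""
        return max(((label, correlate(patch, f))
                    for label, f in filter_bank.items()),
                   key=lambda kv: kv[1])

    # toy 3x3 patterns flattened to length-9 vectors
    bank = {
        "cross":  [0, 1, 0, 1, 1, 1, 0, 1, 0],
        "corner": [1, 1, 0, 1, 0, 0, 0, 0, 0],
    }
    label, score = best_match([0, 1, 0, 1, 1, 1, 0, 1, 0], bank)
    ```

    In the optical system this maximum-search is performed in parallel over the whole filter bank, which is what the holographic memory's throughput enables.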

  11. In the face of emotions: event-related potentials in supraliminal and subliminal facial expression recognition.

    PubMed

    Balconi, Michela; Lucchiari, Claudio

    2005-02-01

    Is facial expression recognition marked by specific event-related potentials (ERPs) effects? Are conscious and unconscious elaborations of emotional facial stimuli qualitatively different processes? In Experiment 1, ERPs elicited by supraliminal stimuli were recorded when 21 participants viewed emotional facial expressions of four emotions and a neutral stimulus. Two ERP components (N2 and P3) were analyzed for their peak amplitude and latency measures. First, emotional face-specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). A more posterior distribution of ERPs was found for N2. Moreover, a lateralization effect was revealed for negative (right lateralization) and positive (left lateralization) facial expressions. In Experiment 2 (20 participants), 1-ms subliminal stimulation was carried out. Unaware information processing was revealed to be quite similar to aware information processing for peak amplitude but not for latency. In fact, unconscious stimulation produced a more delayed peak variation than conscious stimulation.

  12. Callous-unemotional traits and empathy deficits: Mediating effects of affective perspective-taking and facial emotion recognition.

    PubMed

    Lui, Joyce H L; Barry, Christopher T; Sacco, Donald F

    2016-09-01

    Although empathy deficits are thought to be associated with callous-unemotional (CU) traits, findings remain equivocal and little is known about what specific abilities may underlie these purported deficits. Affective perspective-taking (APT) and facial emotion recognition may be implicated, given their independent associations with both empathy and CU traits. The current study examined how CU traits relate to cognitive and affective empathy and whether APT and facial emotion recognition mediate these relations. Participants were 103 adolescents (70 males) aged 16-18 attending a residential programme. CU traits were negatively associated with cognitive and affective empathy to a similar degree. The association between CU traits and affective empathy was partially mediated by APT. Results suggest that assessing mechanisms that may underlie empathic deficits, such as perspective-taking, may be important for youth with CU traits and may inform targets of intervention.

  13. How similar are recognition memory and inductive reasoning?

    PubMed

    Hayes, Brett K; Heit, Evan

    2013-07-01

    Conventionally, memory and reasoning are seen as different types of cognitive activities driven by different processes. In two experiments, we challenged this view by examining the relationship between recognition memory and inductive reasoning involving multiple forms of similarity. A common study set (members of a conjunctive category) was followed by a test set containing old and new category members, as well as items that matched the study set on only one dimension. The study and test sets were presented under recognition or induction instructions. In Experiments 1 and 2, the inductive property being generalized was varied in order to direct attention to different dimensions of similarity. When there was no time pressure on decisions, patterns of positive responding were strongly affected by property type, indicating that different types of similarity were driving recognition and induction. By comparison, speeded judgments showed weaker property effects and could be explained by generalization based on overall similarity. An exemplar model, GEN-EX (GENeralization from EXamples), could account for both the induction and recognition data. These findings show that induction and recognition share core component processes, even when the tasks involve flexible forms of similarity.
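    The exemplar account the authors test can be illustrated with a generic summed-similarity sketch (a simplification in the spirit of GEN-EX, not the authors' actual model): a test item's attention-weighted similarity summed over the study set drives positive responding, and shifting the attention weights, as property type is assumed to do, changes which items look "old". All parameter values below are hypothetical:

    ```python
    import math

    # Minimal exemplar-similarity sketch: summed similarity of a test item to
    # stored study exemplars, with per-dimension attention weights. This is a
    # generic illustration, not the published GEN-EX model.

    def similarity(item, exemplar, weights, c=2.0):
        # exponential-decay similarity over a weighted city-block distance
        d = sum(w * abs(a - b) for w, a, b in zip(weights, item, exemplar))
        return math.exp(-c * d)

    def summed_similarity(item, study_set, weights):
        return sum(similarity(item, ex, weights) for ex in study_set)

    study = [(1.0, 1.0), (0.9, 1.0), (1.0, 0.8)]  # conjunctive-category members
    old_item = (1.0, 1.0)
    lure = (1.0, 0.0)            # matches the study set on one dimension only

    equal = (0.5, 0.5)           # overall similarity (as in speeded judgments)
    dim1_only = (1.0, 0.0)       # attention directed to the first dimension
    ```

    Under equal weights the one-dimension lure scores well below old items, but with attention restricted to the matching dimension its summed similarity rises to match them, which is the kind of property-driven shift the experiments report.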

  14. Social Recognition Memory Requires Two Stages of Protein Synthesis in Mice

    ERIC Educational Resources Information Center

    Wolf, Gerald; Engelmann, Mario; Richter, Karin

    2005-01-01

    Olfactory recognition memory was tested in adult male mice using a social discrimination task. The testing was conducted to begin to characterize the role of protein synthesis and the specific brain regions associated with activity in this task. Long-term olfactory recognition memory was blocked when the protein synthesis inhibitor anisomycin was…

  15. Object memory effects on figure assignment: conscious object recognition is not necessary or sufficient.

    PubMed

    Peterson, M A; de Gelder, B; Rapcsak, S Z; Gerhardstein, P C; Bachoud-Lévi, A

    2000-01-01

    In three experiments we investigated whether conscious object recognition is necessary or sufficient for effects of object memories on figure assignment. In experiment 1, we examined a brain-damaged participant, AD, whose conscious object recognition is severely impaired. AD's responses about figure assignment do reveal effects from memories of object structure, indicating that conscious object recognition is not necessary for these effects, and identifying the figure-ground test employed here as a new implicit test of access to memories of object structure. In experiments 2 and 3, we tested a second brain-damaged participant, WG, for whom conscious object recognition was relatively spared. Nevertheless, effects from memories of object structure on figure assignment were not evident in WG's responses about figure assignment in experiment 2, indicating that conscious object recognition is not sufficient for effects of object memories on figure assignment. WG's performance sheds light on AD's performance, and has implications for the theoretical understanding of object memory effects on figure assignment.

  16. Can You See Me Now? Visualizing Battlefield Facial Recognition Technology in 2035

    DTIC Science & Technology

    2010-04-01

    County Sheriff’s Department, use certain measurements such as the distance between eyes, the length of the nose, or the shape of the ears. However...captures multiple frames of video and composites them into an appropriately high-resolution image that can be processed by the facial recognition software...stream of data. High resolution video systems, such as those described below, will be able to capture orders of magnitude more data in one video frame

  17. Memory Asymmetry of Forward and Backward Associations in Recognition Tasks

    PubMed Central

    Yang, Jiongjiong; Zhu, Zijian; Mecklinger, Axel; Fang, Zhiyong; Li, Han

    2013-01-01

    There is an intensive debate on whether memory for serial order is symmetric. The objective of this study was to explore whether associative asymmetry is modulated by memory task (recognition vs. cued recall). Participants were asked to memorize word triples (Experiment 1–2) or pairs (Experiment 3–6) during the study phase. They then recalled the word by a cue during a cued recall task (Experiment 1–4), and judged whether the presented two words were in the same or in a different order compared to the study phase during a recognition task (Experiment 1–6). To control for perceptual matching between the study and test phase, participants were presented with vertical test pairs when they made directional judgment in Experiment 5. In Experiment 6, participants also made associative recognition judgments for word pairs presented at the same or the reversed position. The results showed that forward associations were recalled at similar levels as backward associations, and that the correlations between forward and backward associations were high in the cued recall tasks. On the other hand, the direction of forward associations was recognized more accurately (and more quickly) than backward associations, and their correlations were comparable to the control condition in the recognition tasks. This forward advantage was also obtained for the associative recognition task. Diminishing positional information did not change the pattern of associative asymmetry. These results suggest that associative asymmetry is modulated by cued recall and recognition manipulations, and that direction as a constituent part of a memory trace can facilitate associative memory. PMID:22924326

  18. Fan Size and Foil Type in Recognition Memory.

    ERIC Educational Resources Information Center

    Walls, Richard T.; And Others

    An experiment involving 20 graduate and undergraduate students (7 males and 13 females) at West Virginia University (Morgantown) assessed "fan network structures" of recognition memory. A fan in network memory structure occurs when several facts are connected into a single node (concept). The more links from that concept to various…

  19. Facial emotion recognition system for autistic children: a feasible study based on FPGA implementation.

    PubMed

    Smitha, K G; Vinod, A P

    2015-11-01

    Children with autism spectrum disorder have difficulty understanding emotional and mental states from the facial expressions of the people they interact with. This inability to understand other people's emotions hinders their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. A portable system ensures ease of use and real-time emotion recognition, aiding immediate feedback while communicating with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to implement in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for realizing a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex 7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits.
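    As context for the hardware study, the PCA feature-extraction step it builds on can be sketched in software. This is a minimal pure-Python illustration (mean-centring, one principal component found by power iteration, projection of a face onto it) using toy 4-pixel "faces"; it is not the paper's FPGA design:

    ```python
    # Minimal PCA sketch: centre the flattened pixel vectors, find the leading
    # principal direction by power iteration on the covariance, and project a
    # sample onto it to obtain a 1-D feature. Real eigenface pipelines keep
    # many components; the data here are toy values.

    def mean_vector(data):
        n = len(data)
        return [sum(col) / n for col in zip(*data)]

    def centred(data, mu):
        return [[x - m for x, m in zip(row, mu)] for row in data]

    def matvec(rows, v):
        return [sum(a * b for a, b in zip(r, v)) for r in rows]

    def leading_component(data, iters=100):
        """First principal direction via power iteration on the covariance."""
        mu = mean_vector(data)
        x = centred(data, mu)
        d = len(mu)
        v = [1.0] * d
        for _ in range(iters):
            xv = matvec(x, v)                      # X v
            v = [sum(x[i][j] * xv[i] for i in range(len(x))) / len(x)
                 for j in range(d)]                # X^T (X v) / n
            norm = sum(c * c for c in v) ** 0.5
            v = [c / norm for c in v]
        return mu, v

    def project(sample, mu, v):
        return sum((s - m) * c for s, m, c in zip(sample, mu, v))

    # toy 4-pixel "faces": the variance lies almost entirely in pixels 0 and 1
    faces = [[2.0, 2.0, 0.1, 0.1], [-2.0, -2.0, 0.1, 0.1],
             [1.5, 1.6, 0.1, 0.1], [-1.4, -1.6, 0.1, 0.1]]
    mu, pc = leading_component(faces)
    feature = project(faces[0], mu, pc)
    ```

    The serial-versus-parallel question the paper studies concerns how these dot products are scheduled on the FPGA fabric, not the mathematics above.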

  20. Emotional availability, understanding emotions, and recognition of facial emotions in obese mothers with young children.

    PubMed

    Bergmann, Sarah; von Klitzing, Kai; Keitel-Korndörfer, Anja; Wendt, Verena; Grube, Matthias; Herpertz, Sarah; Schütz, Astrid; Klein, Annette M

    2016-01-01

    Recent research has identified mother-child relationships of low quality as possible risk factors for childhood obesity. However, it remains open how mothers' own obesity influences the quality of mother-child interaction, and particularly emotional availability (EA). Also unclear is the influence of maternal emotional competencies, i.e. understanding emotions and recognizing facial emotions. This study aimed to (1) investigate differences between obese and normal-weight mothers regarding mother-child EA, maternal understanding of emotions and recognition of facial emotions, and (2) explore how maternal emotional competencies and maternal weight interact with each other in predicting EA. A better understanding of these associations could inform strategies of obesity prevention, especially in children at risk. We assessed EA, understanding of emotions and recognition of facial emotions in 73 obese versus 73 normal-weight mothers and their children aged 6 to 47 months (mean child age = 24.49 months; 80 females). Obese mothers showed lower EA and understanding of emotions. Mothers' normal weight and their ability to understand emotions were positively associated with EA. The ability to recognize facial emotions was positively associated with EA in obese but not in normal-weight mothers. Maternal weight status indirectly influenced EA through its effect on understanding emotions. Maternal emotional competencies may play an important role in establishing high EA in interaction with the child. Children of obese mothers experience lower EA, which may contribute to overweight development. We suggest including elements that aim to improve maternal emotional competencies and mother-child EA in prevention or intervention programmes targeting childhood obesity. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Holographic implementation of a binary associative memory for improved recognition

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Somnath; Ghosh, Ajay; Datta, Asit K.

    1998-03-01

    Neural network associative memory has found wide application in pattern recognition techniques. We propose an associative memory model for binary character recognition. The interconnection strengths of the memory are binary valued. The concept of sparse coding is used to enhance the storage efficiency of the model. The imposed preconditioning of pattern vectors, which is inherent in a sparsely coded conventional memory, is eliminated by using a multistep correlation technique, and the ability of correct association is enhanced in a real-time application. A potential optoelectronic implementation of the proposed associative memory is also described. Learning and recall are possible by using digital optical matrix-vector multiplication, making full use of the parallelism and connectivity of optics. In the experiment, a hologram is used as a long-term memory (LTM) for storing all input information. The short-term memory, or the interconnection weight matrix required during the recall process, is configured by retrieving the necessary information from the holographic LTM.

  2. Astrocytes contribute to gamma oscillations and recognition memory.

    PubMed

    Lee, Hosuk Sean; Ghetti, Andrea; Pinto-Duarte, António; Wang, Xin; Dziewczapolski, Gustavo; Galimi, Francesco; Huitron-Resendiz, Salvador; Piña-Crespo, Juan C; Roberts, Amanda J; Verma, Inder M; Sejnowski, Terrence J; Heinemann, Stephen F

    2014-08-12

    Glial cells are an integral part of functional communication in the brain. Here we show that astrocytes contribute to the fast dynamics of neural circuits that underlie normal cognitive behaviors. In particular, we found that the selective expression of tetanus neurotoxin (TeNT) in astrocytes significantly reduced the duration of carbachol-induced gamma oscillations in hippocampal slices. These data prompted us to develop a novel transgenic mouse model, specifically with inducible tetanus toxin expression in astrocytes. In this in vivo model, we found evidence of a marked decrease in electroencephalographic (EEG) power in the gamma frequency range in awake-behaving mice, whereas neuronal synaptic activity remained intact. The reduction in cortical gamma oscillations was accompanied by impaired behavioral performance in the novel object recognition test, whereas other forms of memory, including working memory and fear conditioning, remained unchanged. These results support a key role for gamma oscillations in recognition memory. Both EEG alterations and behavioral deficits in novel object recognition were reversed by suppression of tetanus toxin expression. These data reveal an unexpected role for astrocytes as essential contributors to information processing and cognitive behavior.

  3. Neural correlates of recognition memory of social information in people with schizophrenia.

    PubMed

    Harvey, Philippe-Olivier; Lepage, Martin

    2014-03-01

    Social dysfunction is a hallmark characteristic of schizophrenia. Part of it may stem from an inability to efficiently encode social information into memory and retrieve it later. This study focused on whether patients with schizophrenia show a memory boost for socially relevant information and engage the same neural network as controls when processing social stimuli that were previously encoded into memory. Patients with schizophrenia and healthy controls performed a social and nonsocial picture recognition memory task while being scanned. We calculated memory performance using d'. Our main analysis focused on brain activity associated with recognition memory of social and nonsocial pictures. Our study included 28 patients with schizophrenia and 26 controls. Healthy controls demonstrated a memory boost for socially relevant information. In contrast, patients with schizophrenia failed to show enhanced recognition sensitivity for social pictures. At the neural level, patients did not engage the dorsomedial prefrontal cortex (DMPFC) as much as controls while recognizing social pictures. Our study did not include direct measures of self-referential processing. All but 3 patients were taking antipsychotic medications, which may have altered both the behavioural performance during the picture recognition memory task and brain activity. Impaired social memory in patients with schizophrenia may be associated with altered DMPFC activity. A reduction of DMPFC activity may reflect less involvement of self-referential processes during memory retrieval. Our functional MRI results contribute to a better mapping of the neural disturbances associated with social memory impairment in patients with schizophrenia and may facilitate the development of innovative treatments, such as transcranial magnetic stimulation.
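    The d' measure used here quantifies recognition sensitivity as the distance between the z-transformed hit rate and false-alarm rate. A minimal computation is sketched below; the counts are illustrative, and the log-linear correction is one common convention for avoiding rates of exactly 0 or 1, not necessarily the one used in the study:

    ```python
    from statistics import NormalDist

    # Recognition sensitivity d' = z(hit rate) - z(false-alarm rate).
    # The trial counts below are hypothetical.

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # log-linear correction keeps rates strictly inside (0, 1)
        h = (hits + 0.5) / (hits + misses + 1)
        fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z = NormalDist().inv_cdf
        return z(h) - z(fa)

    # e.g. 40 old and 40 new pictures in a recognition test
    sensitivity = d_prime(hits=32, misses=8,
                          false_alarms=10, correct_rejections=30)
    ```

    A d' of 0 means no discrimination between old and new items; the "memory boost" for social pictures reported above corresponds to a higher d' for social than nonsocial stimuli.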

  4. Modeling Confidence and Response Time in Recognition Memory

    ERIC Educational Resources Information Center

    Ratcliff, Roger; Starns, Jeffrey J.

    2009-01-01

    A new model for confidence judgments in recognition memory is presented. In the model, the match between a single test item and memory produces a distribution of evidence, with better matches corresponding to distributions with higher means. On this match dimension, confidence criteria are placed, and the areas between the criteria under the…

  5. PKC-epsilon activation is required for recognition memory in the rat.

    PubMed

    Zisopoulou, Styliani; Asimaki, Olga; Leondaritis, George; Vasilaki, Anna; Sakellaridis, Nikos; Pitsikas, Nikolaos; Mangoura, Dimitra

    2013-09-15

    Activation of PKCɛ, an abundant and developmentally regulated PKC isoform in the brain, has been implicated in memory throughout life and across species. Yet direct evidence for a mechanistic role for PKCɛ in memory is still lacking. Hence, we sought to evaluate this in rats, using short-term treatments with two PKCɛ-selective peptides, the inhibitory ɛV1-2 and the activating ψɛRACK, and the novel object recognition task (NORT). Our results show that the PKCɛ-selective activator ψɛRACK did not have a significant effect on recognition memory. In the short time frames used, however, inhibition of PKCɛ activation with the peptide inhibitor ɛV1-2 significantly impaired recognition memory. Moreover, when we addressed at the molecular level the immediate proximal signalling events of PKCɛ activation in acutely dissected rat hippocampi, we found that ψɛRACK increased phosphorylation of MARCKS and activation of Src, Raf, and finally ERK1/2 in a time-dependent manner, whereas ɛV1-2 inhibited all basal activity of this pathway. Taken together, these findings present the first direct evidence that PKCɛ activation is an essential molecular component of recognition memory and point toward the use of systemically administered PKCɛ-regulating peptides as memory study tools and putative therapeutic agents. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Food restriction affects Y-maze spatial recognition memory in developing mice.

    PubMed

    Fu, Yu; Chen, Yanmei; Li, Liane; Wang, Yumei; Kong, Xiangyang; Wang, Jianhong

    2017-08-01

    The ambiguous effects of food restriction (FR) on cognition in rodents have been explored mostly in the aged brain, using a variety of paradigms that involve either rewards or punishments. This study aims to examine the effects of chronic and acute FR of varying intensities on spatial recognition memory in developing mice. We used a Y-maze task that is based on the innate tendency of rodents to explore novel environments. In chronic FR, mice received 70–30% of control chow for seven weeks. In acute FR, mice were food restricted for 12–48 h before the tests. We found that chronic FR had no effect on the preference of mice for novelty in the Y-maze, but severe FR (50–30% of control) impaired spatial recognition memory. The impairment correlated significantly with the slow weight growth induced by FR. Acute FR also did not affect the novelty preference of mice, but either improved or impaired memory retention. These data suggest that chronic FR impairs Y-maze spatial recognition memory in developing mice depending on FR intensity and individual tolerability of the FR, whereas acute FR exerts diverse effects on the memory, either positive or negative. Our findings reveal new insights into the effects of FR on spatial recognition memory in developing animals. Copyright © 2017 ISDN. Published by Elsevier Ltd. All rights reserved.

  7. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    ERIC Educational Resources Information Center

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  8. Monitoring of facial stress during space flight: Optical computer recognition combining discriminative and generative methods

    NASA Astrophysics Data System (ADS)

    Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.

    2007-02-01

    Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence is a subject that has not received much attention, although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that yield results superior to using either approach individually. The current optical algorithm methods performed at a 68% accuracy rate in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. Accuracy and practical feasibility of the technique are being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.
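    The paper's key design choice, feeding a discriminative classifier with the outputs of generative models, can be sketched with simple stand-ins: class-conditional Gaussian log-likelihoods in place of the HMMs, and a perceptron in place of the SVM. Everything below (model parameters, data) is synthetic and only illustrates the hybrid architecture, not the published system:

    ```python
    import math, random

    # Hybrid generative/discriminative sketch: per-class generative scorers
    # produce log-likelihood features, and a simple discriminative classifier
    # (perceptron, standing in for the SVM) is trained on those features.

    def gaussian_loglik(x, mu, sigma):
        return (-0.5 * math.log(2 * math.pi * sigma ** 2)
                - (x - mu) ** 2 / (2 * sigma ** 2))

    def features(x):
        # log-likelihood under a "low-stress" and a "high-stress" generative
        # model (parameters illustrative, as if fit to tracked facial motion)
        return [gaussian_loglik(x, mu=0.0, sigma=1.0),
                gaussian_loglik(x, mu=3.0, sigma=1.0)]

    def train_perceptron(samples, labels, epochs=20, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                f = features(x)
                pred = 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0
                err = y - pred
                w = [wi + lr * err * fi for wi, fi in zip(w, f)]
                b += lr * err
        return w, b

    def classify(x, w, b):
        f = features(x)
        return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0

    random.seed(0)
    low = [random.gauss(0.0, 1.0) for _ in range(50)]   # low-stress samples
    high = [random.gauss(3.0, 1.0) for _ in range(50)]  # high-stress samples
    w, b = train_perceptron(low + high, [0] * 50 + [1] * 50)
    ```

    The motivation for the hybrid is the same as in the paper: the generative stage summarizes a variable-length observation stream into a fixed-length likelihood vector, and the discriminative stage then learns the decision boundary between the two conditions.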

  9. Deficits in long-term recognition memory reveal dissociated subtypes in congenital prosopagnosia.

    PubMed

    Stollhoff, Rainer; Jost, Jürgen; Elze, Tobias; Kennerknecht, Ingo

    2011-01-25

    The study investigates long-term recognition memory in congenital prosopagnosia (CP), a lifelong impairment in face identification that is present from birth. Previous investigations of processing deficits in CP have mostly relied on short-term recognition tests to estimate the scope and severity of individual deficits. We firstly report on a controlled test of long-term (one year) recognition memory for faces and objects conducted with a large group of participants with CP. Long-term recognition memory is significantly impaired in eight CP participants (CPs). In all but one case, this deficit was selective to faces and didn't extend to intra-class recognition of object stimuli. In a test of famous face recognition, long-term recognition deficits were less pronounced, even after accounting for differences in media consumption between controls and CPs. Secondly, we combined test results on long-term and short-term recognition of faces and objects, and found a large heterogeneity in severity and scope of individual deficits. Analysis of the observed heterogeneity revealed a dissociation of CP into subtypes with a homogeneous phenotypical profile. Thirdly, we found that among CPs self-assessment of real-life difficulties, based on a standardized questionnaire, and experimentally assessed face recognition deficits are strongly correlated. Our results demonstrate that controlled tests of long-term recognition memory are needed to fully assess face recognition deficits in CP. Based on controlled and comprehensive experimental testing, CP can be dissociated into subtypes with a homogeneous phenotypical profile. The CP subtypes identified align with those found in prosopagnosia caused by cortical lesions; they can be interpreted with respect to a hierarchical neural system for face perception.

  10. Deficits in Long-Term Recognition Memory Reveal Dissociated Subtypes in Congenital Prosopagnosia

    PubMed Central

    Stollhoff, Rainer; Jost, Jürgen; Elze, Tobias; Kennerknecht, Ingo

    2011-01-01

    The study investigates long-term recognition memory in congenital prosopagnosia (CP), a lifelong impairment in face identification that is present from birth. Previous investigations of processing deficits in CP have mostly relied on short-term recognition tests to estimate the scope and severity of individual deficits. We firstly report on a controlled test of long-term (one year) recognition memory for faces and objects conducted with a large group of participants with CP. Long-term recognition memory is significantly impaired in eight CP participants (CPs). In all but one case, this deficit was selective to faces and didn't extend to intra-class recognition of object stimuli. In a test of famous face recognition, long-term recognition deficits were less pronounced, even after accounting for differences in media consumption between controls and CPs. Secondly, we combined test results on long-term and short-term recognition of faces and objects, and found a large heterogeneity in severity and scope of individual deficits. Analysis of the observed heterogeneity revealed a dissociation of CP into subtypes with a homogeneous phenotypical profile. Thirdly, we found that among CPs self-assessment of real-life difficulties, based on a standardized questionnaire, and experimentally assessed face recognition deficits are strongly correlated. Our results demonstrate that controlled tests of long-term recognition memory are needed to fully assess face recognition deficits in CP. Based on controlled and comprehensive experimental testing, CP can be dissociated into subtypes with a homogeneous phenotypical profile. The CP subtypes identified align with those found in prosopagnosia caused by cortical lesions; they can be interpreted with respect to a hierarchical neural system for face perception. PMID:21283572

  11. In search of a recognition memory engram

    PubMed Central

    Brown, M.W.; Banks, P.J.

    2015-01-01

    A large body of data from human and animal studies using psychological, recording, imaging, and lesion techniques indicates that recognition memory involves at least two separable processes: familiarity discrimination and recollection. Familiarity discrimination for individual visual stimuli seems to be effected by a system centred on the perirhinal cortex of the temporal lobe. The fundamental change that encodes prior occurrence within the perirhinal cortex is a reduction in the responses of neurones when a stimulus is repeated. Neuronal network modelling indicates that a system based on such a change in responsiveness is potentially highly efficient in information theoretic terms. A review is given of findings indicating that perirhinal cortex acts as a storage site for recognition memory of objects and that such storage depends upon processes producing synaptic weakening. PMID:25280908

  12. Behavioral and Neuroimaging Evidence for Facial Emotion Recognition in Elderly Korean Adults with Mild Cognitive Impairment, Alzheimer’s Disease, and Frontotemporal Dementia

    PubMed Central

    Park, Soowon; Kim, Taehoon; Shin, Seong A; Kim, Yu Kyeong; Sohn, Bo Kyung; Park, Hyeon-Ju; Youn, Jung-Hae; Lee, Jun-Young

    2017-01-01

    Background: Facial emotion recognition (FER) is impaired in individuals with frontotemporal dementia (FTD) and Alzheimer’s disease (AD) when compared to healthy older adults. Since deficits in emotion recognition are closely related to caregiver burden or social interactions, researchers have fundamental interest in FER performance in patients with dementia. Purpose: The purpose of this study was to identify the performance profiles of six facial emotions (i.e., fear, anger, disgust, sadness, surprise, and happiness) and neutral faces measured among Korean healthy control (HCs), and those with mild cognitive impairment (MCI), AD, and FTD. Additionally, the neuroanatomical correlates of facial emotions were investigated. Methods: A total of 110 (33 HC, 32 MCI, 32 AD, 13 FTD) older adult participants were recruited from two different medical centers in metropolitan areas of South Korea. These individuals underwent an FER test that was used to assess the recognition of emotions or absence of emotion (neutral) in 35 facial stimuli. Repeated measures two-way analyses of variance were used to examine the distinct profiles of emotional recognition among the four groups. We also performed brain imaging and voxel-based morphometry (VBM) on the participants to examine the associations between FER scores and gray matter volume. Results: The mean score of negative emotion recognition (i.e., fear, anger, disgust, and sadness) clearly discriminated FTD participants from individuals with MCI and AD and HC [F(3,106) = 10.829, p < 0.001, η2 = 0.235], whereas the mean score of positive emotion recognition (i.e., surprise and happiness) did not. A VBM analysis showed negative emotions were correlated with gray matter volume of anterior temporal regions, whereas positive emotions were related to gray matter volume of fronto-parietal regions. Conclusion: Impairment of negative FER in patients with FTD is cross-cultural. The discrete neural correlates of FER indicate that emotional

  13. Recognition memory in developmental prosopagnosia: electrophysiological evidence for abnormal routes to face recognition

    PubMed Central

    Burns, Edwin J.; Tree, Jeremy J.; Weidemann, Christoph T.

    2014-01-01

    Dual process models of recognition memory propose two distinct routes for recognizing a face: recollection and familiarity. Recollection is characterized by the remembering of some contextual detail from a previous encounter with a face whereas familiarity is the feeling of finding a face familiar without any contextual details. The Remember/Know (R/K) paradigm is thought to index the relative contributions of recollection and familiarity to recognition performance. Despite researchers measuring face recognition deficits in developmental prosopagnosia (DP) through a variety of methods, none have considered the distinct contributions of recollection and familiarity to recognition performance. The present study examined recognition memory for faces in eight individuals with DP and a group of controls using an R/K paradigm while recording electroencephalogram (EEG) data at the scalp. Those with DP were found to produce fewer correct “remember” responses and more false alarms than controls. EEG results showed that posterior “remember” old/new effects were delayed and restricted to the right posterior (RP) area in those with DP in comparison to the controls. A posterior “know” old/new effect commonly associated with familiarity for faces was only present in the controls whereas individuals with DP exhibited a frontal “know” old/new effect commonly associated with words, objects and pictures. These results suggest that individuals with DP do not utilize normal face-specific routes when making face recognition judgments but instead process faces using a pathway more commonly associated with objects. PMID:25177283
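
    The abstract does not state how recollection and familiarity were estimated from the R/K responses, but the conventional independence Remember/Know correction treats the remember rate as the recollection estimate and rescales know responses to the trials on which recollection did not occur. A hedged sketch with hypothetical response proportions (not the study's data):

```python
def irk_estimates(p_remember, p_know):
    """Independence Remember/Know estimates: recollection is the
    remember rate; familiarity is the know rate rescaled to trials
    where recollection did not occur."""
    recollection = p_remember
    familiarity = p_know / (1.0 - p_remember)
    return recollection, familiarity

# Hypothetical proportions for one participant
r, f = irk_estimates(p_remember=0.40, p_know=0.30)
print(round(r, 2), round(f, 2))  # → 0.4 0.5
```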

  14. Hippocampal Infusion of Zeta Inhibitory Peptide Impairs Recent, but Not Remote, Recognition Memory in Rats

    PubMed Central

    Hales, Jena B.; Ocampo, Amber C.; Broadbent, Nicola J.; Clark, Robert E.

    2015-01-01

    Spatial memory in rodents can be erased following the infusion of zeta inhibitory peptide (ZIP) into the dorsal hippocampus via indwelling guide cannulas. It is believed that ZIP impairs spatial memory by reversing established late-phase long-term potentiation (LTP). However, it is unclear whether other forms of hippocampus-dependent memory, such as recognition memory, are also supported by hippocampal LTP. In the current study, we tested recognition memory in rats following hippocampal ZIP infusion. In order to overcome the limited targeting of infusions via cannula, we implemented a stereotaxic approach for infusing ZIP throughout the dorsal, intermediate, and ventral hippocampus. Rats infused with ZIP 3–7 days after training on the novel object recognition task exhibited impaired object recognition memory compared to control rats (those infused with aCSF). In contrast, rats infused with ZIP 1 month after training performed similarly to control rats. The ability to form new memories after ZIP infusions remained intact. We suggest that recognition memory for recent events is supported by hippocampal LTP, which can be reversed by hippocampal ZIP infusion. PMID:26380123

  15. Gender differences in facial emotion recognition in persons with chronic schizophrenia.

    PubMed

    Weiss, Elisabeth M; Kohler, Christian G; Brensinger, Colleen M; Bilker, Warren B; Loughead, James; Delazer, Margarete; Nolan, Karen A

    2007-03-01

    The aim of the present study was to investigate possible sex differences in the recognition of facial expressions of emotion and to investigate the pattern of classification errors in schizophrenic males and females. Such an approach provides an opportunity to inspect the degree to which males and females differ in perceiving and interpreting the different emotions displayed to them and to analyze which emotions are most susceptible to recognition errors. Fifty-six chronically hospitalized schizophrenic patients (38 men and 18 women) completed the Penn Emotion Recognition Test (ER40), a computerized emotion discrimination test presenting 40 color photographs of evoked happy, sad, angry, and fearful expressions as well as neutral expressions, balanced for poser gender and ethnicity. We found a significant sex difference in the patterns of error rates in the Penn Emotion Recognition Test. Neutral faces were more commonly mistaken as angry by schizophrenic men, whereas schizophrenic women misinterpreted neutral faces more frequently as sad. Moreover, female faces were better recognized overall, but fear was better recognized in same-gender photographs, whereas anger was better recognized in different-gender photographs. The findings of the present study lend support to the notion that sex differences in aggressive behavior could be related to a cognitive style characterized by hostile attributions to neutral faces in schizophrenic men.

  16. Individual differences in language and working memory affect children's speech recognition in noise.

    PubMed

    McCreery, Ryan W; Spratford, Meredith; Kirby, Benjamin; Brennan, Marc

    2017-05-01

    We examined how cognitive and linguistic skills affect speech recognition in noise for children with normal hearing. Children with better working memory and language abilities were expected to have better speech recognition in noise than peers with poorer skills in these domains. As part of a prospective, cross-sectional study, children with normal hearing completed speech recognition in noise for three types of stimuli: (1) monosyllabic words, (2) syntactically correct but semantically anomalous sentences and (3) semantically and syntactically anomalous word sequences. Measures of vocabulary, syntax and working memory were used to predict individual differences in speech recognition in noise. Participants were 96 children with normal hearing between 5 and 12 years of age. Higher working memory was associated with better speech recognition in noise for all three stimulus types. Higher vocabulary abilities were associated with better recognition in noise for sentences and word sequences, but not for words. Working memory and language both influence children's speech recognition in noise, but the relationships vary across types of stimuli. These findings suggest that clinical assessment of speech recognition is likely to reflect underlying cognitive and linguistic abilities, in addition to a child's auditory skills, consistent with the Ease of Language Understanding model.

  17. Neural correlates of recognition memory of social information in people with schizophrenia

    PubMed Central

    Harvey, Philippe-Olivier; Lepage, Martin

    2014-01-01

    Background Social dysfunction is a hallmark characteristic of schizophrenia. Part of it may stem from an inability to efficiently encode social information into memory and retrieve it later. This study focused on whether patients with schizophrenia show a memory boost for socially relevant information and engage the same neural network as controls when processing social stimuli that were previously encoded into memory. Methods Patients with schizophrenia and healthy controls performed a social and nonsocial picture recognition memory task while being scanned. We calculated memory performance using d′. Our main analysis focused on brain activity associated with recognition memory of social and nonsocial pictures. Results Our study included 28 patients with schizophrenia and 26 controls. Healthy controls demonstrated a memory boost for socially relevant information. In contrast, patients with schizophrenia failed to show enhanced recognition sensitivity for social pictures. At the neural level, patients did not engage the dorsomedial prefrontal cortex (DMPFC) as much as controls while recognizing social pictures. Limitations Our study did not include direct measures of self-referential processing. All but 3 patients were taking antipsychotic medications, which may have altered both the behavioural performance during the picture recognition memory task and brain activity. Conclusion Impaired social memory in patients with schizophrenia may be associated with altered DMPFC activity. A reduction of DMPFC activity may reflect less involvement of self-referential processes during memory retrieval. Our functional MRI results contribute to a better mapping of the neural disturbances associated with social memory impairment in patients with schizophrenia and may facilitate the development of innovative treatments, such as transcranial magnetic stimulation. PMID:24119792
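
    Recognition sensitivity d′, as used in this study, is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch using the standard-library normal quantile function, with hypothetical rates rather than the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Recognition sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical: 80% hits to old pictures, 20% false alarms to new ones
print(round(d_prime(0.80, 0.20), 2))  # → 1.68
```

    In practice rates of 0 or 1 are first adjusted (e.g., with a log-linear correction), since the z-transform is undefined at the extremes.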

  18. Visual memory in unilateral spatial neglect: immediate recall versus delayed recognition.

    PubMed

    Moreh, Elior; Malkinson, Tal Seidel; Zohary, Ehud; Soroker, Nachum

    2014-09-01

    Patients with unilateral spatial neglect (USN) often show impaired performance in spatial working memory tasks, apart from the difficulty retrieving "left-sided" spatial data from long-term memory, shown in the "piazza effect" by Bisiach and colleagues. This study's aim was to compare the effect of the spatial position of a visual object on immediate and delayed memory performance in USN patients. Specifically, immediate verbal recall performance, tested using a simultaneous presentation of four visual objects in four quadrants, was compared with memory in a later-provided recognition task, in which objects were individually shown at the screen center. Unlike healthy controls, USN patients showed a left-side disadvantage and a vertical bias in the immediate free recall task (69% vs. 42% recall for right- and left-sided objects, respectively). In the recognition task, the patients correctly recognized half of "old" items, and their correct rejection rate was 95.5%. Importantly, when the analysis focused on previously recalled items (in the immediate task), no statistically significant difference was found in the delayed recognition of objects according to their original quadrant of presentation. Furthermore, USN patients were able to recollect the correct original location of the recognized objects in 60% of the cases, well beyond chance level. This suggests that the memory trace formed in these cases was not only semantic but also contained a visuospatial tag. Finally, successful recognition of objects missed in recall trials points to formation of memory traces for neglected contralesional objects, which may become accessible to retrieval processes in explicit memory.

  19. The role of unconscious memory errors in judgments of confidence for sentence recognition.

    PubMed

    Sampaio, Cristina; Brewer, William F

    2009-03-01

    The present experiment tested the hypothesis that unconscious reconstructive memory processing can lead to the breakdown of the relationship between memory confidence and memory accuracy. Participants heard deceptive schema-inference sentences and nondeceptive sentences and were tested with either simple or forced-choice recognition. The nondeceptive items showed a positive relation between confidence and accuracy in both simple and forced-choice recognition. However, the deceptive items showed a strong negative confidence/accuracy relationship in simple recognition and a low positive relationship in forced choice. The mean levels of confidence for erroneous responses for deceptive items were inappropriately high in simple recognition but lower in forced choice. These results suggest that unconscious reconstructive memory processes involved in memory for the deceptive schema-inference items led to inaccurate confidence judgments and that, when participants were made aware of the deceptive nature of the schema-inference items through the use of a forced-choice procedure, they adjusted their confidence accordingly.
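
    Confidence/accuracy relationships of this kind are conventionally quantified with the Goodman-Kruskal gamma over trial pairs; the abstract does not name the statistic, so the following is an illustrative sketch with invented trial data:

```python
def goodman_kruskal_gamma(confidence, accuracy):
    """Goodman-Kruskal gamma: (concordant - discordant) pairs over
    (concordant + discordant) pairs, ignoring ties; the conventional
    index of the confidence-accuracy relationship."""
    concordant = discordant = 0
    for i in range(len(confidence)):
        for j in range(i + 1, len(confidence)):
            product = (confidence[i] - confidence[j]) * (accuracy[i] - accuracy[j])
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical trials: higher confidence always accompanies correct responses
conf = [5, 4, 4, 2, 1]
acc = [1, 1, 0, 0, 0]
print(goodman_kruskal_gamma(conf, acc))  # → 1.0
```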

  20. Face puzzle—two new video-based tasks for measuring explicit and implicit aspects of facial emotion recognition

    PubMed Central

    Kliemann, Dorit; Rosenblau, Gabriela; Bölte, Sven; Heekeren, Hauke R.; Dziobek, Isabel

    2013-01-01

    Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between-group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between-group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social cognitive

  1. Face Recognition Vendor Test 2000: Evaluation Report

    DTIC Science & Technology

    2001-02-16

    The biggest change in the facial recognition community since the completion of the FERET program has been the introduction of facial recognition products...program and significantly lowered system costs. Today there are dozens of facial recognition systems available that have the potential to meet...inquiries from numerous government agencies on the current state of facial recognition technology prompted the DoD Counterdrug Technology Development Program

  2. Spaced Learning Enhances Subsequent Recognition Memory by Reducing Neural Repetition Suppression

    PubMed Central

    Xue, Gui; Mei, Leilei; Chen, Chuansheng; Lu, Zhong-Lin; Poldrack, Russell; Dong, Qi

    2012-01-01

    Spaced learning usually leads to better recognition memory as compared with massed learning, yet the underlying neural mechanisms remain elusive. One open question is whether the spacing effect is achieved by reducing neural repetition suppression. In this fMRI study, participants were scanned while intentionally memorizing 120 novel faces, half under the massed learning condition (i.e., four consecutive repetitions with jittered interstimulus interval) and the other half under the spaced learning condition (i.e., the four repetitions were interleaved). Recognition memory tests afterward revealed a significant spacing effect: Participants recognized more items learned under the spaced learning condition than under the massed learning condition. Successful face memory encoding was associated with stronger activation in the bilateral fusiform gyrus, which showed a significant repetition suppression effect modulated by subsequent memory status and spaced learning. Specifically, remembered faces showed smaller repetition suppression than forgotten faces under both learning conditions, and spaced learning significantly reduced repetition suppression. These results suggest that spaced learning enhances recognition memory by reducing neural repetition suppression. PMID:20617892

  3. Spaced learning enhances subsequent recognition memory by reducing neural repetition suppression.

    PubMed

    Xue, Gui; Mei, Leilei; Chen, Chuansheng; Lu, Zhong-Lin; Poldrack, Russell; Dong, Qi

    2011-07-01

    Spaced learning usually leads to better recognition memory as compared with massed learning, yet the underlying neural mechanisms remain elusive. One open question is whether the spacing effect is achieved by reducing neural repetition suppression. In this fMRI study, participants were scanned while intentionally memorizing 120 novel faces, half under the massed learning condition (i.e., four consecutive repetitions with jittered interstimulus interval) and the other half under the spaced learning condition (i.e., the four repetitions were interleaved). Recognition memory tests afterward revealed a significant spacing effect: Participants recognized more items learned under the spaced learning condition than under the massed learning condition. Successful face memory encoding was associated with stronger activation in the bilateral fusiform gyrus, which showed a significant repetition suppression effect modulated by subsequent memory status and spaced learning. Specifically, remembered faces showed smaller repetition suppression than forgotten faces under both learning conditions, and spaced learning significantly reduced repetition suppression. These results suggest that spaced learning enhances recognition memory by reducing neural repetition suppression.

  4. The effects of acute social isolation on long-term social recognition memory.

    PubMed

    Leser, Noam; Wagner, Shlomo

    2015-10-01

    The abilities to recognize individual animals of the same species and to distinguish them from other individuals are the basis for all mammalian social organizations and relationships. These abilities, termed social recognition memory, can be explored in mice and rats using their innate tendency to investigate novel social stimuli more persistently than familiar ones. Using this methodology it was found that social recognition memory is mediated by a specific neural network in the brain, the activity of which is modulated by several molecules, such as the neuropeptides oxytocin and vasopressin. During the last 15 years, several independent studies have revealed that social recognition memory of mice and rats depends upon their housing conditions. Specifically, long-term social recognition memory cannot be formed as soon as a few days after the animal has been socially isolated. This rapid and reversible impairment caused by acute social isolation seems to be specific to social memory and has not been observed in other types of memory. Here we review these studies and suggest that this unique system may serve to explore the mechanisms underlying the well-known negative effects of partial or perceived social isolation on human mental health. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. The face of fear and anger: Facial width-to-height ratio biases recognition of angry and fearful expressions.

    PubMed

    Deska, Jason C; Lloyd, E Paige; Hugenberg, Kurt

    2018-04-01

    The ability to rapidly and accurately decode facial expressions is adaptive for human sociality. Although judgments of emotion are primarily determined by musculature, static face structure can also impact emotion judgments. The current work investigates how facial width-to-height ratio (fWHR), a stable feature of all faces, influences perceivers' judgments of expressive displays of anger and fear (Studies 1a, 1b, & 2), and anger and happiness (Study 3). Across 4 studies, we provide evidence consistent with the hypothesis that perceivers more readily see anger on faces with high fWHR compared with those with low fWHR, which instead facilitates the recognition of fear and happiness. This bias emerges when participants are led to believe that targets displaying otherwise neutral faces are attempting to mask an emotion (Studies 1a & 1b), and is evident when faces display an emotion (Studies 2 & 3). Together, these studies suggest that target facial width-to-height ratio biases ascriptions of emotion with consequences for emotion recognition speed and accuracy. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
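
    fWHR is standardly operationalized as bizygomatic width (the distance between the cheekbones) divided by upper-face height (the distance from the upper lip to the midpoint of the brows). A trivial sketch with hypothetical landmark distances; the function name and values are illustrative, not taken from the study:

```python
def facial_width_to_height_ratio(bizygomatic_width, upper_face_height):
    """fWHR: cheekbone-to-cheekbone distance divided by the distance
    from the upper lip to the mid-brow, in the same units."""
    return bizygomatic_width / upper_face_height

# Hypothetical landmark distances, in pixels
print(round(facial_width_to_height_ratio(140.0, 70.0), 2))  # → 2.0
```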

  6. False recognition of facial expressions of emotion: causes and implications.

    PubMed

    Fernández-Dols, José-Miguel; Carrera, Pilar; Barchard, Kimberly A; Gacitua, Marta

    2008-08-01

    This article examines the importance of semantic processes in the recognition of emotional expressions, through a series of three studies on false recognition. The first study found a high frequency of false recognition of prototypical expressions of emotion when participants viewed slides and video clips of nonprototypical fearful and happy expressions. The second study tested whether semantic processes caused false recognition. The authors found that participants made significantly higher error rates when asked to detect expressions that corresponded to semantic labels than when asked to detect visual stimuli. Finally, given that previous research reported that false memories are less prevalent in younger children, the third study tested whether false recognition of prototypical expressions increased with age. The authors found that 67% of 8- to 9-year-old children reported nonpresent prototypical expressions of fear in a fearful context, but only 40% of 6- to 7-year-old children did so. Taken together, these three studies demonstrate the importance of semantic processes in the detection and categorization of prototypical emotional expressions.

  7. The effect of mild acute stress during memory consolidation on emotional recognition memory.

    PubMed

    Corbett, Brittany; Weinberg, Lisa; Duarte, Audrey

    2017-11-01

    Stress during consolidation improves recognition memory performance. Generally, this memory benefit is greater for emotionally arousing stimuli than neutral stimuli. The strength of the stressor also plays a role in memory performance, with memory performance improving up to a moderate level of stress and thereafter worsening. As our daily stressors are generally minimal in strength, we chose to induce mild acute stress to determine its effect on memory performance. In the current study, we investigated whether mild acute stress during consolidation improves memory performance for emotionally arousing images. To investigate this, we had participants encode highly arousing negative, minimally arousing negative, and neutral images. We induced stress using the Montreal Imaging Stress Task (MIST) in half of the participants and administered a control task to the other half directly after encoding (i.e. during consolidation), and tested recognition 48 h later. We found no difference in memory performance between the stress and control groups. We found a graded pattern in confidence, with responders in the stress group having the least confidence in their hits and controls having the most. Across groups, we found highly arousing negative images were better remembered than minimally arousing negative or neutral images. Although stress did not affect memory accuracy, responders, as defined by cortisol reactivity, were less confident in their decisions. Our results suggest that the daily stressors humans experience, regardless of their emotional affect, do not have adverse effects on memory. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Impaired social recognition memory in Recombination Activating Gene 1-deficient mice

    PubMed Central

    McGowan, Patrick O.; Hope, Thomas A.; Meck, Warren H.; Kelsoe, Garnett; Williams, Christina L.

    2012-01-01

    The Recombination Activating Genes (RAGs) encode two enzymes that play key roles in the adaptive immune system. RAG1 and RAG2 mediate VDJ recombination, a process necessary for the maturation of B- and T-cells. Interestingly, RAG1 is also expressed in the brain, particularly in areas of high neural density such as the hippocampus, although its function is unknown. We tested evidence that RAG1 plays a role in brain function using a social recognition memory task, an assessment of the acquisition and retention of conspecific identity. In a first experiment, we found that RAG1-deficient mice show impaired social recognition memory compared to mice wildtype for the RAG1 allele. In a second experiment, by breeding to homogenize background genotype we found that RAG1-deficient mice show impaired social recognition memory relative to heterozygous or RAG2-deficient littermates. Because RAG1 and RAG2 null mice are both immunodeficient, the results suggest that the memory impairment is not an indirect effect of immunological dysfunction. RAG1-deficient mice show normal habituation to non-socially derived odors and habituation to an open-field, indicating that the observed effect is not likely a result of a general deficit in habituation to novelty. These data trace the origin of the impairment in social recognition memory in RAG1-deficient mice to the RAG1 gene locus and implicate RAG1 in memory formation. PMID:21354115

  9. The mysterious noh mask: contribution of multiple facial parts to the recognition of emotional expressions.

    PubMed

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    A Noh mask worn by expert actors when performing on a Japanese traditional Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. The facial images having the mouth of an upward/downward tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the traditionally formulated performing styles when evaluating the emotions of the Noh masks.

  10. The Mysterious Noh Mask: Contribution of Multiple Facial Parts to the Recognition of Emotional Expressions

    PubMed Central

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    Background A Noh mask worn by expert actors when performing on a Japanese traditional Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. Methodology/Principal Findings In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. The facial images having the mouth of an upward/downward tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. Conclusions/Significance The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the traditionally

  11. MAO-A Phenotype Effects Response Sensitivity and the Parietal Old/New Effect during Recognition Memory

    PubMed Central

    Ross, Robert S.; Smolen, Andrew; Curran, Tim; Nyhus, Erika

    2018-01-01

    A critical problem for developing personalized treatment plans for cognitive disruptions is the lack of understanding how individual differences influence cognition. Recognition memory is one cognitive ability that varies from person to person and that variation may be related to different genetic phenotypes. One gene that may impact recognition memory is the monoamine oxidase A gene (MAO-A), which influences the transcription rate of MAO-A. Examination of how MAO-A phenotypes impact behavioral and event-related potentials (ERPs) correlates of recognition memory may help explain individual differences in recognition memory performance. Therefore, the current study uses electroencephalography (EEG) in combination with genetic phenotyping of the MAO-A gene to determine how well-characterized ERP components of recognition memory, the early frontal old/new effect, left parietal old/new effect, late frontal old/new effect, and the late posterior negativity (LPN) are impacted by MAO-A phenotype during item and source memory. Our results show that individuals with the MAO-A phenotype leading to increased transcription have lower response sensitivity during both item and source memory. Additionally, during item memory the left parietal old/new effect is not present due to increased ERP amplitude for correct rejections. The results suggest that MAO-A phenotype changes EEG correlates of recognition memory and influences how well individuals differentiate between old and new items. PMID:29487517

  12. Biometric correspondence between ReFace computerized facial approximations and CT-derived ground truth skin surface models objectively examined using an automated facial recognition system.

    PubMed

    Parks, Connie L; Monson, Keith L

    2018-05-01

    This study employed an automated facial recognition system as a means of objectively evaluating biometric correspondence between a ReFace facial approximation and the computed tomography (CT) derived ground truth skin surface of the same individual. High rates of biometric correspondence were observed, irrespective of rank class (Rk) or demographic cohort examined. Overall, 48% of the test subjects' ReFace approximation probes (n=96) were matched to his or her corresponding ground truth skin surface image at R1, a rank indicating a high degree of biometric correspondence and a potential positive identification. Identification rates improved with each successively broader rank class (R10 = 85%, R25 = 96%, and R50 = 99%), with 100% identification by R57. A sharp increase (39% mean increase) in identification rates was observed between R1 and R10 across most rank classes and demographic cohorts. In contrast, significantly lower (p<0.01) increases in identification rates were observed between R10 and R25 (8% mean increase) and between R25 and R50 (3% mean increase). No significant (p>0.05) performance differences were observed across demographic cohorts or CT scan protocols. Performance measures observed in this research suggest that ReFace approximations are biometrically similar to the actual faces of the approximated individuals and, therefore, may have potential operational utility in contexts in which computerized approximations are utilized as probes in automated facial recognition systems. Copyright © 2018. Published by Elsevier B.V.
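
    The rank-class identification rates reported here are points on a cumulative match characteristic (CMC) curve: the fraction of probes whose true match appears at or above rank k in the gallery ranking. A minimal sketch with invented match ranks (not the study's data):

```python
def identification_rate(match_ranks, k):
    """Fraction of probes whose true match appears at rank <= k;
    one point on a cumulative match characteristic (CMC) curve."""
    return sum(rank <= k for rank in match_ranks) / len(match_ranks)

# Hypothetical ranks at which each of ten probes matched its subject
ranks = [1, 1, 2, 3, 5, 8, 12, 20, 30, 55]
print(identification_rate(ranks, 1))   # → 0.2
print(identification_rate(ranks, 10))  # → 0.6
```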

  13. Recognition of facial emotion and perceived parental bonding styles in healthy volunteers and personality disorder patients.

    PubMed

    Zheng, Leilei; Chai, Hao; Chen, Wanzhen; Yu, Rongrong; He, Wei; Jiang, Zhengyan; Yu, Shaohua; Li, Huichun; Wang, Wei

    2011-12-01

    Early parental bonding experiences play a role in emotion recognition and expression in later adulthood, and patients with personality disorder frequently experience inappropriate parental bonding styles; the aim of the present study was therefore to explore whether parental bonding style is correlated with recognition of facial emotion in personality disorder patients. The Parental Bonding Instrument (PBI) and the Matsumoto and Ekman Japanese and Caucasian Facial Expressions of Emotion (JACFEE) photo set tests were carried out in 289 participants. Patients scored lower on the parental Care subscale but higher on the parental Freedom Control and Autonomy Denial subscales, and they displayed less accuracy when recognizing contempt, disgust and happiness than the healthy volunteers. In healthy volunteers, maternal Autonomy Denial significantly predicted accuracy when recognizing fear, and maternal Care predicted the accuracy of recognizing sadness. In patients, paternal Care negatively predicted the accuracy of recognizing anger, paternal Freedom Control predicted the perceived intensity of contempt, and maternal Care predicted the accuracy of recognizing sadness and the perceived intensity of disgust. Parental bonding styles have an impact on the decoding process and sensitivity when recognizing facial emotions, especially in personality disorder patients. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.

  14. Basic perceptual changes that alter meaning and neural correlates of recognition memory

    PubMed Central

    Gao, Chuanji; Hermiller, Molly S.; Voss, Joel L.; Guo, Chunyan

    2015-01-01

    It is difficult to pinpoint the border between perceptual and conceptual processing, despite their treatment as distinct entities in many studies of recognition memory. For instance, alteration of simple perceptual characteristics of a stimulus can radically change meaning, such as the color of bread changing from white to green. We sought to better understand the role of perceptual and conceptual processing in memory by identifying the effects of changing a basic perceptual feature (color) on behavioral and neural correlates of memory in circumstances when this change would be expected to either change the meaning of a stimulus or to have no effect on meaning (i.e., to influence conceptual processing or not). Abstract visual shapes (“squiggles”) were colorized during study and presented during test in either the same color or a different color. Those squiggles that subjects found to resemble meaningful objects supported behavioral measures of conceptual priming, whereas meaningless squiggles did not. Further, changing color from study to test had a selective effect on behavioral correlates of priming for meaningful squiggles, indicating that color change altered conceptual processing. During a recognition memory test, color change altered event-related brain potential (ERP) correlates of memory for meaningful squiggles but not for meaningless squiggles. Specifically, color change reduced the amplitude of frontally distributed N400 potentials (FN400), implying that these potentials indicated conceptual processing during recognition memory that was sensitive to color change. In contrast, color change had no effect on FN400 correlates of recognition for meaningless squiggles, which were overall smaller in amplitude than for meaningful squiggles (further indicating that these potentials signal conceptual processing during recognition). Thus, merely changing the color of abstract visual shapes can alter their meaning, changing behavioral and neural correlates of recognition memory.

  15. Memory for faces: the effect of facial appearance and the context in which the face is encountered.

    PubMed

    Mattarozzi, Katia; Todorov, Alexander; Codispoti, Maurizio

    2015-03-01

    We investigated the effects of appearance of emotionally neutral faces and the context in which the faces are encountered on incidental face memory. To approximate real-life situations as closely as possible, faces were embedded in a newspaper article, with a headline that specified an action performed by the person pictured. We found that facial appearance affected memory so that faces perceived as trustworthy or untrustworthy were remembered better than neutral ones. Furthermore, the memory of untrustworthy faces was slightly better than that of trustworthy faces. The emotional context of encoding affected the details of face memory. Faces encountered in a neutral context were more likely to be recognized as only familiar. In contrast, emotionally relevant contexts of encoding, whether pleasant or unpleasant, increased the likelihood of remembering semantic and even episodic details associated with faces. These findings suggest that facial appearance (i.e., perceived trustworthiness) affects face memory. Moreover, the findings support prior evidence that the engagement of emotion processing during memory encoding increases the likelihood that events are not only recognized but also remembered.

  16. Age-Related Differences in Recognition Memory for Items and Associations: Contribution of Individual Differences in Working Memory and Metamemory

    PubMed Central

    Bender, Andrew R.; Raz, Naftali

    2012-01-01

    Ability to form new associations between unrelated items is particularly sensitive to aging, but the reasons for such differential vulnerability are unclear. In this study, we examined the role of objective and subjective factors (working memory and beliefs about memory strategies) on differential relations of age with recognition of items and associations. Healthy adults (N = 100, age 21 to 79) studied word pairs, completed item and association recognition tests, and rated the effectiveness of shallow (e.g., repetition) and deep (e.g., imagery or sentence generation) encoding strategies. Advanced age was associated with reduced working memory (WM) capacity and poorer associative recognition. In addition, reduced WM capacity, beliefs in the utility of ineffective encoding strategies, and lack of endorsement of effective ones were independently associated with impaired associative memory. Thus, maladaptive beliefs about memory in conjunction with reduced cognitive resources account in part for differences in associative memory commonly attributed to aging. PMID:22251381

  17. In search of a recognition memory engram.

    PubMed

    Brown, M W; Banks, P J

    2015-03-01

    A large body of data from human and animal studies using psychological, recording, imaging, and lesion techniques indicates that recognition memory involves at least two separable processes: familiarity discrimination and recollection. Familiarity discrimination for individual visual stimuli seems to be effected by a system centred on the perirhinal cortex of the temporal lobe. The fundamental change that encodes prior occurrence within the perirhinal cortex is a reduction in the responses of neurones when a stimulus is repeated. Neuronal network modelling indicates that a system based on such a change in responsiveness is potentially highly efficient in information theoretic terms. A review is given of findings indicating that perirhinal cortex acts as a storage site for recognition memory of objects and that such storage depends upon processes producing synaptic weakening. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Reduced Recognition of Dynamic Facial Emotional Expressions and Emotion-Specific Response Bias in Children with an Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Evers, Kris; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2015-01-01

    Emotion labelling was evaluated in two matched samples of 6-14-year old children with and without an autism spectrum disorder (ASD; N = 45 and N = 50, resp.), using six dynamic facial expressions. The Emotion Recognition Task proved to be valuable demonstrating subtle emotion recognition difficulties in ASD, as we showed a general poorer emotion…

  19. A 2D range Hausdorff approach to 3D facial recognition.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N2) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
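
    The core idea, confining the nearest-neighbour search to a small pixel neighbourhood of the 2D range-image grid rather than over all 3D point pairs, can be sketched as follows. This is a simplified illustration under the paper's assumption of good initial alignment, not the published implementation:

    ```python
    import numpy as np

    def directed_range_hausdorff(probe, template, window=1):
        """Directed Hausdorff distance between two aligned 2D range images.
        For each probe pixel, the nearest template point is searched only in a
        (2*window+1)^2 pixel neighbourhood, reducing the usual O(N^2) point-set
        matching to O(N). NaN marks missing range data (e.g. self-occlusion)."""
        h, w = probe.shape
        worst = 0.0
        for i in range(h):
            for j in range(w):
                if np.isnan(probe[i, j]):
                    continue  # skip missing data for robustness to dropouts
                best = np.inf
                for di in range(-window, window + 1):
                    for dj in range(-window, window + 1):
                        ii, jj = i + di, j + dj
                        if 0 <= ii < h and 0 <= jj < w and not np.isnan(template[ii, jj]):
                            # 3D distance: lateral pixel offset plus range difference
                            d = np.sqrt(di**2 + dj**2 + (probe[i, j] - template[ii, jj])**2)
                            best = min(best, d)
                if np.isfinite(best):
                    worst = max(worst, best)
        return worst

    # identical aligned surfaces are at distance 0; a uniform 1-unit range offset gives 1
    flat = np.zeros((4, 4))
    d_same = directed_range_hausdorff(flat, flat)
    d_offset = directed_range_hausdorff(flat, flat + 1.0)
    ```

    A partial-Hausdorff variant (taking a quantile of the per-pixel distances instead of the maximum) would add the outlier robustness the abstract mentions.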

  20. On the contribution of unconscious processes to recognition memory.

    PubMed

    Cleary, Anne M

    2012-01-01

    Voss et al. review work showing unconscious contributions to recognition memory. An electrophysiological effect, the N300, appears to signify an unconscious recognition process. Whether such unconscious recognition requires highly specific experimental circumstances or can occur in typical types of recognition testing situations has remained a question. The fact that the N300 has also been shown to be the sole electrophysiological correlate of the recognition-without-identification effect that occurs with visual word fragments suggests that unconscious processes may contribute to a wider range of recognition testing situations than those originally investigated by Voss and colleagues. Some implications of this possibility are discussed.

  1. Test Sequence Priming in Recognition Memory

    ERIC Educational Resources Information Center

    Johns, Elizabeth E.; Mewhort, D. J. K.

    2009-01-01

    The authors examined priming within the test sequence in 3 recognition memory experiments. A probe primed its successor whenever both probes shared a feature with the same studied item ("interjacent priming"), indicating that the study item like the probe is central to the decision. Interjacent priming occurred even when the 2 probes did…

  2. Automatically Log Off Upon Disappearance of Facial Image

    DTIC Science & Technology

    2005-03-01

    log off a PC when the user’s face disappears for an adjustable time interval. Among the fundamental technologies of biometrics, facial recognition is... facial recognition products. In this report, a brief overview of face detection technologies is provided. The particular neural network-based face...ensure that the user logging onto the system is the same person. Among the fundamental technologies of biometrics, facial recognition is the only

  3. Recognition memory for low- and high-frequency-filtered emotional faces: Low spatial frequencies drive emotional memory enhancement, whereas high spatial frequencies drive the emotion-induced recognition bias.

    PubMed

    Rohr, Michaela; Tröger, Johannes; Michely, Nils; Uhde, Alarith; Wentura, Dirk

    2017-07-01

    This article deals with two well-documented phenomena regarding emotional stimuli: emotional memory enhancement-that is, better long-term memory for emotional than for neutral stimuli-and the emotion-induced recognition bias-that is, a more liberal response criterion for emotional than for neutral stimuli. Studies on visual emotion perception and attention suggest that emotion-related processes can be modulated by means of spatial-frequency filtering of the presented emotional stimuli. Specifically, low spatial frequencies are assumed to play a primary role for the influence of emotion on attention and judgment. Given this theoretical background, we investigated whether spatial-frequency filtering also impacts (1) the memory advantage for emotional faces and (2) the emotion-induced recognition bias, in a series of old/new recognition experiments. Participants completed incidental-learning tasks with high- (HSF) and low- (LSF) spatial-frequency-filtered emotional and neutral faces. The results of the surprise recognition tests showed a clear memory advantage for emotional stimuli. Most importantly, the emotional memory enhancement was significantly larger for face images containing only low-frequency information (LSF faces) than for HSF faces across all experiments, suggesting that LSF information plays a critical role in this effect, whereas the emotion-induced recognition bias was found only for HSF stimuli. We discuss our findings in terms of both the traditional account of different processing pathways for HSF and LSF information and a stimulus features account. The double dissociation in the results favors the latter account-that is, an explanation in terms of differences in the characteristics of HSF and LSF stimuli.
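
    The spatial-frequency manipulation described above amounts to low- or high-pass filtering the face images in the Fourier domain. A minimal sketch, using an ideal hard cutoff for brevity whereas stimulus preparation typically uses a smooth Gaussian or Butterworth filter:

    ```python
    import numpy as np

    def spatial_frequency_filter(image, cutoff, keep="low"):
        """Split an image into low- (LSF) or high- (HSF) spatial-frequency
        content with an ideal circular mask in the Fourier domain. `cutoff`
        is the mask radius in cycles per image."""
        f = np.fft.fftshift(np.fft.fft2(image))
        h, w = image.shape
        yy, xx = np.ogrid[:h, :w]
        radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
        mask = radius <= cutoff if keep == "low" else radius > cutoff
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

    # the LSF and HSF versions are complementary: together they restore the image
    img = np.random.default_rng(1).random((32, 32))
    lsf = spatial_frequency_filter(img, cutoff=8, keep="low")
    hsf = spatial_frequency_filter(img, cutoff=8, keep="high")
    ```

    Because the two masks partition the frequency plane, `lsf + hsf` reconstructs the original image, which is a convenient sanity check when preparing filtered stimuli.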

  4. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    ERIC Educational Resources Information Center

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  5. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind.

    PubMed

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J; Sadato, Norihiro

    2013-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  6. Eye-movement assessment of the time course in facial expression recognition: Neurophysiological implications.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2009-12-01

    Happy, surprised, disgusted, angry, sad, fearful, and neutral faces were presented extrafoveally, with fixations on faces allowed or not. The faces were preceded by a cue word that designated the face to be saccaded in a two-alternative forced-choice discrimination task (2AFC; Experiments 1 and 2), or were followed by a probe word for recognition (Experiment 3). Eye tracking was used to decompose the recognition process into stages. Relative to the other expressions, happy faces (1) were identified faster (as early as 160 msec from stimulus onset) in extrafoveal vision, as revealed by shorter saccade latencies in the 2AFC task; (2) required less encoding effort, as indexed by shorter first fixations and dwell times; and (3) required less decision-making effort, as indicated by fewer refixations on the face after the recognition probe was presented. This reveals a happy-face identification advantage both prior to and during overt attentional processing. The results are discussed in relation to prior neurophysiological findings on latencies in facial expression recognition.

  7. Deletion of the GluA1 AMPA receptor subunit impairs recency-dependent object recognition memory

    PubMed Central

    Sanderson, David J.; Hindley, Emma; Smeaton, Emily; Denny, Nick; Taylor, Amy; Barkus, Chris; Sprengel, Rolf; Seeburg, Peter H.; Bannerman, David M.

    2011-01-01

    Deletion of the GluA1 AMPA receptor subunit impairs short-term spatial recognition memory. It has been suggested that short-term recognition depends upon memory caused by the recent presentation of a stimulus that is independent of contextual–retrieval processes. The aim of the present set of experiments was to test whether the role of GluA1 extends to nonspatial recognition memory. Wild-type and GluA1 knockout mice were tested on the standard object recognition task and a context-independent recognition task that required recency-dependent memory. In a first set of experiments it was found that GluA1 deletion failed to impair performance on either of the object recognition or recency-dependent tasks. However, GluA1 knockout mice displayed increased levels of exploration of the objects in both the sample and test phases compared to controls. In contrast, when the time that GluA1 knockout mice spent exploring the objects was yoked to control mice during the sample phase, it was found that GluA1 deletion now impaired performance on both the object recognition and the recency-dependent tasks. GluA1 deletion failed to impair performance on a context-dependent recognition task regardless of whether object exposure in knockout mice was yoked to controls or not. These results demonstrate that GluA1 is necessary for nonspatial as well as spatial recognition memory and plays an important role in recency-dependent memory processes. PMID:21378100

  8. A spiking neural network based cortex-like mechanism and application to facial expression recognition.

    PubMed

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured, cortex-simulating model: a feedforward, hierarchical simulation of the ventral stream of the visual cortex built on a biologically plausible, computationally convenient spiking neural network (SNN) system. The motivation comes directly from recent pioneering work on detailed functional decomposition of the feedforward pathway of the ventral stream of visual cortex and from developments in artificial spiking neural networks. By combining the logical structure of the cortical hierarchy with the computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortex-like feedforward hierarchy can handle complicated pattern recognition problems, suggesting that combining cognitive models with modern neurocomputational approaches has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in rich, dynamic, and complex environments, providing a new starting point for improved models of cortex-like mechanisms.

  9. Facial expression recognition under partial occlusion based on fusion of global and local features

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji

    2018-04-01

    Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion that fuses global and local features. In the global aspect, information entropy is first employed to locate the occluded region. Second, Principal Component Analysis (PCA) is adopted to reconstruct the occluded region of the image. After that, a replacement strategy reconstructs the image by substituting the occluded region with the corresponding region of the best-matched image in the training set, and a Pyramid Weber Local Descriptor (PWLD) feature is then extracted. Finally, the outputs of an SVM are fitted to the probabilities of the target class using a sigmoid function. In the local aspect, an overlapping block-based method is adopted to extract WLD features, each block is weighted adaptively by information entropy, and Chi-square distance and similar-block summation methods are then applied to obtain the probabilities of each emotion class. Finally, fusion at the decision level combines the global and local results based on Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
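
    The PCA reconstruction step, estimating what the occluded region should look like from a basis learned on clean training faces, can be illustrated in a few lines. The toy data and function names below are assumptions for illustration, not the paper's code:

    ```python
    import numpy as np

    def fit_pca(X, k):
        """Learn the mean face and top-k principal axes from clean training
        faces; X has shape (n_samples, n_features)."""
        mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, vt[:k]

    def pca_reconstruct(x, mean, components):
        """Project a face vector onto the PCA subspace and back. Applied to an
        occluded face, this replaces the damaged region with the subspace's
        statistically plausible estimate."""
        return mean + components.T @ (components @ (x - mean))

    # toy faces lying exactly in a 2D subspace are reconstructed exactly
    rng = np.random.default_rng(2)
    basis = rng.random((2, 10))
    faces = rng.random((20, 2)) @ basis
    mean, comps = fit_pca(faces, 2)
    rec = pca_reconstruct(faces[0], mean, comps)
    ```

    In the paper's pipeline the reconstructed image is then used only to find the best-matched training image, whose unoccluded region is pasted in before feature extraction.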

  10. Remembering the snake in the grass: Threat enhances recognition but not source memory.

    PubMed

    Meyer, Miriam Magdalena; Bell, Raoul; Buchner, Axel

    2015-12-01

    Research on the influence of emotion on source memory has yielded inconsistent findings. The object-based framework (Mather, 2007) predicts that negatively arousing stimuli attract attention, resulting in enhanced within-object binding, and, thereby, enhanced source memory for intrinsic context features of emotional stimuli. To test this prediction, we presented pictures of threatening and harmless animals, the color of which had been experimentally manipulated. In a memory test, old-new recognition for the animals and source memory for their color were assessed. In all 3 experiments, old-new recognition was better for the more threatening material, which supports previous reports of an emotional memory enhancement. This recognition advantage was due to the emotional properties of the stimulus material and was not specific to snake stimuli. However, inconsistent with the prediction of the object-based framework, intrinsic source memory was not affected by emotion. (c) 2015 APA, all rights reserved.

  11. FRIT characterized hierarchical kernel memory arrangement for multiband palmprint recognition

    NASA Astrophysics Data System (ADS)

    Kisku, Dakshina R.; Gupta, Phalguni; Sing, Jamuna K.

    2015-10-01

    In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of linear singularities while also capturing the singularities along lines and edges. The proposed system uses the Finite Ridgelet Transform to represent the multispectral palmprint image, which is then modeled by Kernel Associative Memories. For recognition purposes, a Bayesian classifier is used. Finally, the recognition scheme is thoroughly tested with the benchmarking CASIA multispectral palmprint database. The experimental results exhibit the robustness of the proposed system under different wavelengths of palm imaging.

  12. Estradiol and luteinizing hormone regulate recognition memory following subchronic phencyclidine: Evidence for hippocampal GABA action.

    PubMed

    Riordan, Alexander J; Schaler, Ari W; Fried, Jenny; Paine, Tracie A; Thornton, Janice E

    2018-05-01

    The cognitive symptoms of schizophrenia are poorly understood and difficult to treat. Estrogens may mitigate these symptoms via unknown mechanisms. To examine these mechanisms, we tested whether increasing estradiol (E) or decreasing luteinizing hormone (LH) could mitigate short-term episodic memory loss in a phencyclidine (PCP) model of schizophrenia. We then assessed whether changes in cortical or hippocampal GABA may underlie these effects. Female rats were ovariectomized and injected subchronically with PCP. To modulate E and LH, animals received estradiol capsules or Antide injections. Short-term episodic memory was assessed using the novel object recognition task (NORT). Brain expression of GAD67 was analyzed via western blot, and parvalbumin-containing cells were counted using immunohistochemistry. Some rats received hippocampal infusions of a GABA-A agonist, GABA-A antagonist, or GAD inhibitor before behavioral testing. We found that PCP reduced hippocampal GAD67 and abolished recognition memory. Antide restored hippocampal GAD67 and rescued recognition memory in PCP-treated animals. Estradiol prevented PCP's amnesic effect in NORT but failed to restore hippocampal GAD67. PCP did not cause significant differences in the number of parvalbumin-expressing cells or in cortical expression of GAD67. Hippocampal infusions of a GABA-A agonist restored recognition memory in PCP-treated rats. Blocking hippocampal GAD or GABA-A receptors in ovariectomized animals reproduced recognition memory loss similar to PCP and inhibited estradiol's protection of recognition memory in PCP-treated animals. In summary, decreasing LH or increasing E can lessen short-term episodic memory loss, as measured by novel object recognition, in a PCP model of schizophrenia. Alterations in hippocampal GABA may contribute to both PCP's effects on recognition memory and the hormones' ability to prevent or reverse them. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. The beneficial effect of oxytocin on avoidance-related facial emotion recognition depends on early life stress experience.

    PubMed

    Feeser, Melanie; Fan, Yan; Weigand, Anne; Hahn, Adam; Gärtner, Matti; Aust, Sabine; Böker, Heinz; Bajbouj, Malek; Grimm, Simone

    2014-12-01

    Previous studies have shown that oxytocin (OXT) enhances social cognitive processes. It has also been demonstrated that OXT does not uniformly facilitate social cognition. The effects of OXT administration strongly depend on the exposure to stressful experiences in early life. Emotional facial recognition is crucial for social cognition. However, no study has yet examined how the effects of OXT on the ability to identify emotional faces are altered by early life stress (ELS) experiences. Given the role of OXT in modulating social motivational processes, we specifically aimed to investigate its effects on the recognition of approach- and avoidance-related facial emotions. In a double-blind, between-subjects, placebo-controlled design, 82 male participants performed an emotion recognition task with faces taken from the "Karolinska Directed Emotional Faces" set. We clustered the six basic emotions along the dimensions approach (happy, surprise, anger) and avoidance (fear, sadness, disgust). ELS was assessed with the Childhood Trauma Questionnaire (CTQ). Our results showed that OXT improved the ability to recognize avoidance-related emotional faces as compared to approach-related emotional faces. Whereas the performance for avoidance-related emotions in participants with higher ELS scores was comparable in both OXT and placebo condition, OXT enhanced emotion recognition in participants with lower ELS scores. Independent of OXT administration, we observed increased emotion recognition for avoidance-related faces in participants with high ELS scores. Our findings suggest that the investigation of OXT on social recognition requires a broad approach that takes ELS experiences as well as motivational processes into account.

  14. Facial Mimicry and Emotion Consistency: Influences of Memory and Context.

    PubMed

    Kirkham, Alexander J; Hayes, Amy E; Pawling, Ralph; Tipper, Steven P

    2015-01-01

    This study investigates whether mimicry of facial emotions is a stable response or can instead be modulated and influenced by memory of the context in which the emotion was initially observed, and therefore the meaning of the expression. The study manipulated emotion consistency implicitly, where a face expressing smiles or frowns was irrelevant and to be ignored while participants categorised target scenes. Some face identities always expressed emotions consistent with the scene (e.g., smiling with a positive scene), whilst others were always inconsistent (e.g., frowning with a positive scene). During this implicit learning of face identity and emotion consistency there was evidence for encoding of face-scene emotion consistency, with slower RTs, a reduction in trust, and inhibited facial EMG for faces expressing incompatible emotions. However, in a later task where the faces were subsequently viewed expressing emotions with no additional context, there was no evidence for retrieval of prior emotion consistency, as mimicry of emotion was similar for consistent and inconsistent individuals. We conclude that facial mimicry can be influenced by current emotion context, but there is little evidence of learning, as subsequent mimicry of emotionally consistent and inconsistent faces is similar.

  15. Effect of nitrogen narcosis on free recall and recognition memory in open water.

    PubMed

    Hobbs, M; Kneller, W

    2009-01-01

    Previous research has demonstrated that nitrogen narcosis causes decrements in memory performance, but the precise aspect of memory impaired is not clear in the literature. The present research investigated the effect of narcosis on free recall and recognition memory by applying signal detection theory (SDT) to the analysis of the recognition data. Using a repeated measures design, the free recall and recognition memory of 20 divers was tested in four learning-recall conditions: shallow-shallow (SS), deep-deep (DD), shallow-deep (SD) and deep-shallow (DS). The data were collected in the ocean off Dahab, Egypt, with shallow water representing a depth of 0-10 m (33 ft) and deep water 37-40 m (121-131 ft). The presence of narcosis was independently indexed with subjective ratings. In comparison to the SS condition there was a clear impairment of free recall in the DD and DS conditions, but not the SD condition. Recognition memory remained unaffected by narcosis. It was concluded that narcosis-induced memory decrements cannot be explained as simply an impairment of input into long-term memory or of self-guided search; it is suggested instead that narcosis acts to reduce the level of processing/encoding of information.
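
    The SDT analysis applied to the recognition data reduces to computing sensitivity (d') and response criterion (c) from hit and false-alarm counts. A minimal sketch; the log-linear 0.5 correction below is a common convention for avoiding rates of exactly 0 or 1, not necessarily the one used in this study:

    ```python
    from statistics import NormalDist

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        """Signal detection indices for old/new recognition:
        d' = z(hit rate) - z(false-alarm rate), c = -(z(HR) + z(FAR)) / 2."""
        z = NormalDist().inv_cdf
        hr = (hits + 0.5) / (hits + misses + 1)
        far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return z(hr) - z(far), -0.5 * (z(hr) + z(far))

    d_equal, _ = sdt_measures(15, 5, 15, 5)   # hit rate equals false-alarm rate
    d_better, _ = sdt_measures(18, 2, 5, 15)  # hits well above false alarms
    ```

    Separating d' from c is what lets such studies conclude that narcosis left recognition sensitivity itself intact rather than merely shifting response bias.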

  16. MicroRNA-132 regulates recognition memory and synaptic plasticity in the perirhinal cortex

    PubMed Central

    Scott, Helen L; Tamagnini, Francesco; Narduzzo, Katherine E; Howarth, Joanna L; Lee, Youn-Bok; Wong, Liang-Fong; Brown, Malcolm W; Warburton, Elizabeth C; Bashir, Zafar I; Uney, James B

    2012-01-01

    Evidence suggests that the acquisition of recognition memory depends upon CREB-dependent long-lasting changes in synaptic plasticity in the perirhinal cortex. The CREB-responsive microRNA miR-132 has been shown to regulate synaptic transmission, and we set out to investigate a role for this microRNA in recognition memory and its underlying plasticity mechanisms. To this end we mediated overexpression of miR-132 selectively in the rat perirhinal cortex and demonstrated an impairment in short-term recognition memory. This functional deficit was associated with a reduction in both long-term depression and long-term potentiation. These results confirm that microRNAs are key coordinators of the intracellular pathways that mediate experience-dependent changes in the brain. In addition, these results demonstrate a role for miR-132 in the neuronal mechanisms underlying the formation of short-term recognition memory. PMID:22845676

  17. Deficits in Facial Expression Recognition in Male Adolescents with Early-Onset or Adolescence-Onset Conduct Disorder

    ERIC Educational Resources Information Center

    Fairchild, Graeme; Van Goozen, Stephanie H. M.; Calder, Andrew J.; Stollery, Sarah J.; Goodyer, Ian M.

    2009-01-01

    Background: We examined whether conduct disorder (CD) is associated with deficits in facial expression recognition and, if so, whether these deficits are specific to the early-onset form of CD, which emerges in childhood. The findings could potentially inform the developmental taxonomic theory of antisocial behaviour, which suggests that…

  18. A Single-System Model Predicts Recognition Memory and Repetition Priming in Amnesia

    PubMed Central

    Kessels, Roy P.C.; Wester, Arie J.; Shanks, David R.

    2014-01-01

    We challenge the claim that there are distinct neural systems for explicit and implicit memory by demonstrating that a formal single-system model predicts the pattern of recognition memory (explicit) and repetition priming (implicit) in amnesia. In the current investigation, human participants with amnesia categorized pictures of objects at study and then, at test, identified fragmented versions of studied (old) and nonstudied (new) objects (providing a measure of priming), and made a recognition memory judgment (old vs new) for each object. Numerous results in the amnesic patients were predicted in advance by the single-system model, as follows: (1) deficits in recognition memory and priming were evident relative to a control group; (2) items judged as old were identified at greater levels of fragmentation than items judged new, regardless of whether the items were actually old or new; and (3) the magnitude of the priming effect (the identification advantage for old vs new items) was greater for items judged old than for items judged new. Model evidence measures also favored the single-system model over two formal multiple-systems models. The findings support the single-system model, which explains the pattern of recognition and priming in amnesia primarily as a reduction in the strength of a single dimension of memory strength, rather than as a selective explicit memory system deficit. PMID:25122896

  19. Insular Cortex Is Involved in Consolidation of Object Recognition Memory

    ERIC Educational Resources Information Center

    Bermudez-Rattoni, Federico; Okuda, Shoki; Roozendaal, Benno; McGaugh, James L.

    2005-01-01

    Extensive evidence indicates that the insular cortex (IC), also termed gustatory cortex, is critically involved in conditioned taste aversion and taste recognition memory. Although most studies of the involvement of the IC in memory have investigated taste, there is some evidence that the IC is involved in memory that is not based on taste. In…

  20. Executive cognitive functioning and the recognition of facial expressions of emotion in incarcerated violent offenders, non-violent offenders, and controls.

    PubMed

    Hoaken, Peter N S; Allaby, David B; Earle, Jeff

    2007-01-01

    Violence is a social problem that carries enormous costs; however, our understanding of its etiology is quite limited. A large body of research exists, which suggests a relationship between abnormalities of the frontal lobe and aggression; as a result, many researchers have implicated deficits in so-called "executive function" as an antecedent to aggressive behaviour. Another possibility is that violence may be related to problems interpreting facial expressions of emotion, a deficit associated with many forms of psychopathology, and an ability linked to the prefrontal cortex. The current study investigated performance on measures of executive function and on a facial-affect recognition task in 20 violent offenders, 20 non-violent offenders, and 20 controls. In support of our hypotheses, both offender groups performed significantly more poorly on measures of executive function relative to controls. In addition, violent offenders were significantly poorer on the facial-affect recognition task than either of the other two groups. Interestingly, scores on these measures were significantly correlated, with executive deficits associated with difficulties accurately interpreting facial affect. The implications of these results are discussed in terms of a broader understanding of violent behaviour. Copyright 2007 Wiley-Liss, Inc.