Sample records for facial recognition task

  1. [Measuring impairment of facial affects recognition in schizophrenia. Preliminary study of the facial emotions recognition task (TREF)].

    PubMed

    Gaudelus, B; Virgile, J; Peyroux, E; Leleu, A; Baudouin, J-Y; Franck, N

    2015-06-01

    The impairment of social cognition, including facial affect recognition, is a well-established trait in schizophrenia, and specific cognitive remediation programs focusing on facial affect recognition have been developed by different teams worldwide. However, even though social cognitive impairments have been confirmed, previous studies have also shown heterogeneity of results across subjects. Individual abilities should therefore be assessed before proposing such programs. Most research teams apply tasks based on the facial affect sets of Ekman et al. or Gur et al. However, these tasks are not easily applicable in clinical practice. Here, we present the Facial Emotions Recognition Test (TREF), which is designed to identify facial affect recognition impairments in clinical practice. The test is composed of 54 photos and evaluates abilities in the recognition of six universal emotions (joy, anger, sadness, fear, disgust and contempt). Each of these emotions is represented with colored photos of four different models (two men and two women) at nine intensity levels from 20 to 100%. Each photo is presented for 10 seconds; no time limit for responding is applied. The present study compared TREF scores in a sample of healthy controls (64 subjects) and people with stabilized schizophrenia (45 subjects) diagnosed according to DSM-IV-TR criteria. We analysed global scores for all emotions, as well as subscores for each emotion, between these two groups, taking gender differences into account. Our results were consistent with previous findings. Applying the TREF, we confirmed an impairment in facial affect recognition in schizophrenia by showing significant differences between the two groups in global results (76.45% for healthy controls versus 61.28% for people with schizophrenia), as well as in subscores for each emotion except joy. Scores for women were significantly higher than for men in the population…

  2. Glucose enhancement of a facial recognition task in young adults.

    PubMed

    Metzger, M M

    2000-02-01

    Numerous studies have reported that glucose administration enhances memory processes in both elderly and young adult subjects. Although these studies have utilized a variety of procedures and paradigms, investigations of both young and elderly subjects have typically used verbal tasks (word list recall, paragraph recall, etc.). In the present study, the effect of glucose consumption on a nonverbal, facial recognition task in young adults was examined. Lemonade sweetened with either glucose (50 g) or saccharin (23.7 mg) was consumed by college students (mean age of 21.1 years) 15 min prior to a facial recognition task. The task consisted of a familiarization phase in which subjects were presented with "target" faces, followed immediately by a recognition phase in which subjects had to identify the targets among a random array of familiar target and novel "distractor" faces. Statistical analysis indicated that there were no differences on hit rate (target identification) for subjects who consumed either saccharin or glucose prior to the test. However, further analyses revealed that subjects who consumed glucose committed significantly fewer false alarms and had (marginally) higher d-prime scores (a signal detection measure) compared to subjects who consumed saccharin prior to the test. These results parallel a previous report demonstrating glucose enhancement of a facial recognition task in probable Alzheimer's patients; however, this is believed to be the first demonstration of glucose enhancement for a facial recognition task in healthy, young adults.
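
    The d-prime (d′) measure mentioned above combines the hit rate and the false-alarm rate into a single sensitivity score. A minimal sketch in Python, assuming the usual Gaussian signal detection model and an arbitrary clipping convention for extreme rates (the study's exact correction is not reported):

        # Illustrative d-prime computation from hit and false-alarm rates.
        # Rates are clipped away from 0 and 1 to avoid infinite z-scores
        # (one common convention; the values below are hypothetical).
        from scipy.stats import norm

        def d_prime(hit_rate: float, fa_rate: float, eps: float = 1e-3) -> float:
            hit_rate = min(max(hit_rate, eps), 1 - eps)
            fa_rate = min(max(fa_rate, eps), 1 - eps)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Equal hit rates but fewer false alarms yield a higher d-prime,
        # matching the glucose-versus-saccharin pattern reported above.
        print(d_prime(0.80, 0.10))  # ~2.12
        print(d_prime(0.80, 0.25))  # ~1.52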

  3. People with chronic facial pain perform worse than controls at a facial emotion recognition task, but it is not all about the emotion.

    PubMed

    von Piekartz, H; Wallwork, S B; Mohr, G; Butler, D S; Moseley, G L

    2015-04-01

    Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotional recognition tasks could reflect problems with emotional processing, motor processing or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on both tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks; in the first task, they identified the facial emotion presented in a photograph. In the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm to induce implicit motor imagery. People with chronic facial pain performed worse than controls at both tasks (Facially Expressed Emotion Labelling (FEEL) task P < 0·001; left/right judgment task P < 0·001). Participants who were more accurate at one task were also more accurate at the other, regardless of group (P < 0·001, r² = 0·523). Participants with chronic facial pain were worse than controls at both the FEEL emotion recognition task and the left/right facial expression task, and performance covaried within participants. We propose that disrupted motor processing may underpin or at least contribute to the difficulty that facial pain patients have in emotion recognition, and that further research that tests this proposal is warranted.

  4. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    PubMed

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus because the attentional strategies associated with promotion focus enhance performance on well-learned or innate tasks, such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced and better facial emotion recognition was observed in a promotion focus compared to a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy in a promotion focus (i.e., striving to make hits). A prevention focus had an impact on neither perceptual processing nor facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.

  5. Face puzzle—two new video-based tasks for measuring explicit and implicit aspects of facial emotion recognition

    PubMed Central

    Kliemann, Dorit; Rosenblau, Gabriela; Bölte, Sven; Heekeren, Hauke R.; Dziobek, Isabel

    2013-01-01

    Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent accounts in social cognition suggest a dual-process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology, the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in the respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between-group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between-group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social cognitive…

  6. Facial emotion recognition and borderline personality pathology.

    PubMed

    Meehan, Kevin B; De Panfilis, Chiara; Cain, Nicole M; Antonucci, Camilla; Soliani, Antonio; Clarkin, John F; Sambataro, Fabio

    2017-09-01

    The impact of borderline personality pathology on facial emotion recognition has been in dispute, with impaired, comparable, and enhanced accuracy found in high borderline personality groups. Discrepancies are likely driven by variations in facial emotion recognition tasks across studies (stimuli type/intensity) and heterogeneity in borderline personality pathology. This study evaluates facial emotion recognition for neutral and negative emotions (fear/sadness/disgust/anger) presented at varying intensities. Effortful control was evaluated as a moderator of facial emotion recognition in borderline personality. Non-clinical multicultural undergraduates (n = 132) completed a morphed facial emotion recognition task of neutral and negative emotional expressions across different intensities (100% Neutral; 25%/50%/75% Emotion) and self-reported borderline personality features and effortful control. Greater borderline personality features related to decreased accuracy in detecting neutral faces, but increased accuracy in detecting negative emotion faces, particularly at low-intensity thresholds. This pattern was moderated by effortful control; for individuals with low but not high effortful control, greater borderline personality features related to misattributions of emotion to neutral expressions, and enhanced detection of low-intensity emotional expressions. Individuals with high borderline personality features may therefore exhibit a bias toward detecting negative emotions that are not or barely present; however, good self-regulatory skills may protect against this potential social-cognitive vulnerability.

  7. Facial Emotion Recognition in Bipolar Disorder and Healthy Aging.

    PubMed

    Altamura, Mario; Padalino, Flavia A; Stella, Eleonora; Balzotti, Angela; Bellomo, Antonello; Palumbo, Rocco; Di Domenico, Alberto; Mammarella, Nicola; Fairfield, Beth

    2016-03-01

    Emotional face recognition is impaired in bipolar disorder, but it is not clear whether this is specific to the illness. Here, we investigated how aging and bipolar disorder influence dynamic emotional face recognition. Twenty older adults, 16 bipolar patients, and 20 control subjects performed a dynamic affective facial recognition task and a subsequent rating task. Participants pressed a key as soon as they were able to discriminate whether the neutral face was assuming a happy or angry facial expression and then rated the intensity of each facial expression. Results showed that older adults recognized happy expressions faster, whereas bipolar patients recognized angry expressions faster. Furthermore, both groups rated emotional faces more intensely than did the control subjects. This study is one of the first to compare how aging and clinical conditions influence emotional facial recognition and underlines the need to consider the role of specific and common factors in emotional face recognition.

  8. Influences on Facial Emotion Recognition in Deaf Children

    ERIC Educational Resources Information Center

    Sidera, Francesc; Amadó, Anna; Martínez, Laura

    2017-01-01

    This exploratory research is aimed at studying facial emotion recognition abilities in deaf children and how they relate to linguistic skills and the characteristics of deafness. A total of 166 participants (75 deaf) aged 3-8 years were administered the following tasks: facial emotion recognition, naming vocabulary and cognitive ability. The…

  9. Dynamic facial expression recognition based on geometric and texture features

    NASA Astrophysics Data System (ADS)

    Li, Ming; Wang, Zengfu

    2018-04-01

    Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method using geometric and texture features. In our system, the facial landmark movements and texture variations between pairwise images are used to perform the dynamic facial expression recognition tasks. For each facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, a Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method achieves competitive performance compared with other methods.
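
    As a rough sketch of the pipeline this record describes (pairwise geometric and texture features classified with a Support Vector Machine), assuming facial landmarks are already available per frame and using a simple pixel-difference histogram as a stand-in for the paper's texture descriptor:

        # Pairwise features between the first frame and a later frame:
        # landmark displacements (geometric) plus a histogram of pixel
        # differences (texture), concatenated and fed to an SVM.
        import numpy as np
        from sklearn.svm import SVC

        def pairwise_features(lm0, lm1, frame0, frame1, bins=16):
            geometric = (lm1 - lm0).ravel()                     # landmark movement
            diff = frame1.astype(float) - frame0.astype(float)  # texture variation
            texture, _ = np.histogram(diff, bins=bins, range=(-255, 255),
                                      density=True)
            return np.concatenate([geometric, texture])

        # Hypothetical training data: one feature vector per sequence,
        # six expression classes as in Cohn-Kanade-style setups.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 68 * 2 + 16))  # e.g. 68 (x, y) landmarks + 16 bins
        y = rng.integers(0, 6, size=40)
        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.predict(X[:3]))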

  10. Facial emotion recognition in patients with focal and diffuse axonal injury.

    PubMed

    Yassin, Walid; Callahan, Brandy L; Ubukata, Shiho; Sugihara, Genichi; Murai, Toshiya; Ueda, Keita

    2017-01-01

    Facial emotion recognition impairment has been well documented in patients with traumatic brain injury. Studies exploring the neural substrates involved in such deficits have implicated specific grey matter structures (e.g. orbitofrontal regions), as well as diffuse white matter damage. Our study aims to clarify whether different types of injuries (i.e. focal vs. diffuse) will lead to different types of impairments on facial emotion recognition tasks, as no study has directly compared these patients. The present study examined performance and response patterns on a facial emotion recognition task in 14 participants with diffuse axonal injury (DAI), 14 with focal injury (FI) and 22 healthy controls. We found that, overall, participants with FI and DAI performed more poorly than controls on the facial emotion recognition task. Further, we observed comparable emotion recognition performance in participants with FI and DAI, despite differences in the nature and distribution of their lesions. However, the rating response pattern between the patient groups was different. This is the first study to show that pure DAI, without gross focal lesions, can independently lead to facial emotion recognition deficits and that rating patterns differ depending on the type and location of trauma.

  11. Neurobiological mechanisms associated with facial affect recognition deficits after traumatic brain injury.

    PubMed

    Neumann, Dawn; McDonald, Brenna C; West, John; Keiski, Michelle A; Wang, Yang

    2016-06-01

    The neurobiological mechanisms that underlie facial affect recognition deficits after traumatic brain injury (TBI) have not yet been identified. Using functional magnetic resonance imaging (fMRI), the study aims were to 1) determine if there are differences in brain activation during facial affect processing in people with TBI who have facial affect recognition impairments (TBI-I) relative to people with TBI and healthy controls who do not have facial affect recognition impairments (TBI-N and HC, respectively); and 2) identify relationships between neural activity and facial affect recognition performance. A facial affect recognition screening task performed outside the scanner was used to determine group classification; TBI patients who scored more than one standard deviation below normal performance were classified as TBI-I, while TBI patients with normal scores were classified as TBI-N. An fMRI facial recognition paradigm was then performed within the 3T environment. Results from 35 participants are reported (TBI-I = 11, TBI-N = 12, and HC = 12). For the fMRI task, TBI-I and TBI-N groups scored significantly lower than the HC group. Blood oxygenation level-dependent (BOLD) signals for facial affect recognition, compared to a baseline condition of viewing a scrambled face, revealed lower neural activation in the right fusiform gyrus (FG) in the TBI-I group than in the HC group. Right fusiform gyrus activity correlated with accuracy on the facial affect recognition tasks (both within and outside the scanner). Decreased FG activity suggests facial affect recognition deficits after TBI may be the result of impaired holistic face processing. Future directions and clinical implications are discussed.

  12. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition

    PubMed Central

    Murphy, Jillian M.; Ridley, Nicole J.; Vercammen, Ans

    2015-01-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS to responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction. PMID:25971602

  13. Interest and attention in facial recognition.

    PubMed

    Burgess, Melinda C R; Weaver, George E

    2003-04-01

    When applied to facial recognition, the levels of processing paradigm has yielded consistent results: faces processed in deep conditions are recognized better than faces processed under shallow conditions. However, there are multiple explanations for this occurrence. The own-race advantage in facial recognition, the tendency to recognize faces from one's own race better than faces from another race, is also consistently shown but not clearly explained. This study was designed to test the hypothesis that the levels of processing findings in facial recognition are a result of interest and attention, not differences in processing. This hypothesis was tested for both own and other faces with 105 Caucasian general psychology students. Levels of processing was manipulated as a between-subjects variable; students were asked to answer one of four types of study questions, e.g., "deep" or "shallow" processing questions, while viewing the study faces. Students' recognition of a subset of previously presented Caucasian and African-American faces from a test-set with an equal number of distractor faces was tested. They indicated their interest in and attention to the task. The typical levels of processing effect was observed with better recognition performance in the deep conditions than in the shallow conditions for both own- and other-race faces. The typical own-race advantage was also observed regardless of level of processing condition. For both own- and other-race faces, level of processing explained a significant portion of the recognition variance above and beyond what was explained by interest in and attention to the task.

  14. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions.

    PubMed

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expressions recognition task. Recognition bias was measured as participants' tendency to over-attribute the anger label to other negative facial expressions. Participants' heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants' performance was controlled for age, cognitive and educational levels and naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants' tendency to use the anger label was observed. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children's "pre-existing bias" for anger labeling in forced-choice emotion recognition tasks. Moreover, they strengthen the thesis according to which the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim's perceptive and attentive focus on salient environmental social stimuli.

  15. Temporal lobe structures and facial emotion recognition in schizophrenia patients and nonpsychotic relatives.

    PubMed

    Goghari, Vina M; Macdonald, Angus W; Sponheim, Scott R

    2011-11-01

    Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.

  16. Relation between facial affect recognition and configural face processing in antipsychotic-free schizophrenia.

    PubMed

    Fakra, Eric; Jouve, Elisabeth; Guillaume, Fabrice; Azorin, Jean-Michel; Blin, Olivier

    2015-03-01

    Deficit in facial affect recognition is a well-documented impairment in schizophrenia, closely connected to social outcome. This deficit could be related to psychopathology, but also to a broader dysfunction in processing facial information. In addition, patients with schizophrenia inadequately use configural information, a type of processing that relies on spatial relationships between facial features. To date, no study has specifically examined the link between symptoms and misuse of configural information in the deficit in facial affect recognition. Unmedicated schizophrenia patients (n = 30) and matched healthy controls (n = 30) performed a facial affect recognition task and a face inversion task, which tests aptitude to rely on configural information. In patients, regressions were carried out between facial affect recognition, symptom dimensions and inversion effect. Patients, compared with controls, showed a deficit in facial affect recognition and a lower inversion effect. Negative symptoms and lower inversion effect could account for 41.2% of the variance in facial affect recognition. This study confirms the presence of a deficit in facial affect recognition, as well as dysfunctional manipulation of configural information, in antipsychotic-free patients. Negative symptoms and poor processing of configural information explained a substantial part of the deficient recognition of facial affect. We speculate that this deficit may be caused by several factors, among which psychopathology and failure to correctly manipulate configural information stand independently.

  17. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions

    PubMed Central

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expressions recognition task. Recognition bias was measured as participants’ tendency to over-attribute the anger label to other negative facial expressions. Participants’ heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants’ performance was controlled for age, cognitive and educational levels and naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants’ tendency to use the anger label was observed. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children’s “pre-existing bias” for anger labeling in forced-choice emotion recognition tasks. Moreover, they strengthen the thesis according to which the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim’s perceptive and attentive focus on salient environmental social stimuli. PMID:26509890

  18. Computer Recognition of Facial Profiles

    DTIC Science & Technology

    1974-08-01

    A system for the recognition of human faces from facial profiles… The work of Goldstein, Harmon, and Lesk [8] indicates, however, that for facial recognition, a ten-class problem provides a fair test of the classification system.

  19. Facial emotion recognition is inversely correlated with tremor severity in essential tremor.

    PubMed

    Auzou, Nicolas; Foubert-Samier, Alexandra; Dupouy, Sandrine; Meissner, Wassilios G

    2014-04-01

    We here assess limbic and orbitofrontal control in 20 patients with essential tremor (ET) and 18 age-matched healthy controls using the Ekman Facial Emotion Recognition Task and the IOWA Gambling Task. Our results show an inverse relation between facial emotion recognition and tremor severity. ET patients also showed worse performance in joy and fear recognition, as well as subtle abnormalities in risk detection, but these differences did not reach significance after correction for multiple testing.

  20. Quantifying facial expression recognition across viewing conditions.

    PubMed

    Goren, Deborah; Wilson, Hugh R

    2006-04-01

    Facial expressions are key to social interactions and to assessment of potential danger in various situations. Therefore, our brains must be able to recognize facial expressions when they are transformed in biologically plausible ways. We used synthetic happy, sad, angry and fearful faces to determine the amount of geometric change required to recognize these emotions during brief presentations. Five-alternative forced choice conditions involving central viewing, peripheral viewing and inversion were used to study recognition among the four emotions. Two-alternative forced choice was used to study affect discrimination when spatial frequency information in the stimulus was modified. The results show an emotion- and task-dependent pattern of detection. Facial expressions presented with low peak frequencies are much harder to discriminate from neutral than faces defined by either mid or high peak frequencies. Peripheral presentation of faces also makes recognition much more difficult, except for happy faces. Differences between fearful detection and recognition tasks are probably due to common confusions with sadness when recognizing fear from among other emotions. These findings further support the idea that these emotions are processed separately from each other.

  21. Neuroticism and facial emotion recognition in healthy adults.

    PubMed

    Andric, Sanja; Maric, Nadja P; Knezevic, Goran; Mihaljevic, Marina; Mirjanic, Tijana; Velthorst, Eva; van Os, Jim

    2016-04-01

    The aim of the present study was to examine whether healthy individuals with higher levels of neuroticism, a robust independent predictor of psychopathology, exhibit altered facial emotion recognition performance. Facial emotion recognition accuracy was investigated in 104 healthy adults using the Degraded Facial Affect Recognition Task (DFAR). Participants' degree of neuroticism was estimated using neuroticism scales extracted from the Eysenck Personality Questionnaire and the Revised NEO Personality Inventory. A significant negative correlation between the degree of neuroticism and the percentage of correct answers on the DFAR was found only for happy facial expressions (significant after applying Bonferroni correction). Altered sensitivity to emotional context represents a useful and easily obtained cognitive phenotype that correlates strongly with inter-individual variations in neuroticism linked to stress vulnerability and subsequent psychopathology. The present findings could have implications for early intervention strategies and staging models in psychiatry.

  22. Specific Impairments in the Recognition of Emotional Facial Expressions in Parkinson’s Disease

    PubMed Central

    Clark, Uraina S.; Neargarder, Sandy; Cronin-Golomb, Alice

    2008-01-01

    Studies investigating the ability to recognize emotional facial expressions in non-demented individuals with Parkinson’s disease (PD) have yielded equivocal findings. A possible reason for this variability may lie in the confounding of emotion recognition with cognitive task requirements, a confound arising from the lack of a control condition using non-emotional stimuli. The present study examined emotional facial expression recognition abilities in 20 non-demented patients with PD and 23 control participants relative to their performances on a non-emotional landscape categorization test with comparable task requirements. We found that PD participants were normal on the control task but exhibited selective impairments in the recognition of facial emotion, specifically for anger (driven by those with right hemisphere pathology) and surprise (driven by those with left hemisphere pathology), even when controlling for depression level. Male but not female PD participants further displayed specific deficits in the recognition of fearful expressions. We suggest that the neural substrates that may subserve these impairments include the ventral striatum, amygdala, and prefrontal cortices. Finally, we observed that in PD participants, deficiencies in facial emotion recognition correlated with higher levels of interpersonal distress, which calls attention to the significant psychosocial impact that facial emotion recognition impairments may have on individuals with PD. PMID:18485422

  23. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease

    PubMed Central

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry in emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions, and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory, suggesting that facial mimicry is a potential lever for therapeutic actions in PD, even if it does not seem to be strictly required for recognizing emotion as such. PMID:27467393

  24. [Prosopagnosia and facial expression recognition].

    PubMed

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies indicating that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are complex cognitive processes that involve several regions of the brain.

  25. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at the population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing.
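
    For illustration, a standard random-effects pooling of r-based effect sizes (Fisher z-transform plus the DerSimonian-Laird between-study variance estimator) can be written as below; the r and n values are invented, and the authors' exact computation may differ:

        # Random-effects meta-analysis of correlations (textbook recipe).
        import numpy as np

        r = np.array([0.30, 0.45, 0.25, 0.38])  # hypothetical per-study effects
        n = np.array([120, 80, 150, 60])        # hypothetical sample sizes

        z = np.arctanh(r)                       # Fisher z-transform
        v = 1.0 / (n - 3)                       # sampling variance of z
        w = 1.0 / v                             # fixed-effect weights

        z_fixed = np.sum(w * z) / np.sum(w)
        Q = np.sum(w * (z - z_fixed) ** 2)      # heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (len(r) - 1)) / c) # DerSimonian-Laird estimate

        w_re = 1.0 / (v + tau2)                 # random-effects weights
        z_re = np.sum(w_re * z) / np.sum(w_re)
        print("pooled r:", np.tanh(z_re))       # back-transform to r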

  26. Automatic Facial Expression Recognition and Operator Functional State

    NASA Technical Reports Server (NTRS)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
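
    A minimal sketch of the detection step this abstract describes, using OpenCV's bundled Haar cascades on a live video stream; the cascade choices and loop structure are illustrative assumptions, not the program's actual code:

        # Locate faces with a Haar cascade, then search for eyes inside
        # each detected face region, drawing rectangles on the live feed.
        import cv2

        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")

        cap = cv2.VideoCapture(0)  # default webcam
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
                roi = gray[y:y + h, x:x + w]  # restrict eye search to the face
                for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
                    cv2.rectangle(frame, (x + ex, y + ey),
                                  (x + ex + ew, y + ey + eh), (255, 0, 0), 1)
            cv2.imshow("faces", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()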

  27. Automatic Facial Expression Recognition and Operator Functional State

    NASA Technical Reports Server (NTRS)

    Blanson, Nina

    2011-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.

  28. Dissociation between facial and bodily expressions in emotion recognition: A case study.

    PubMed

    Leiva, Samanta; Margulis, Laura; Micciulli, Andrea; Ferreres, Aldo

    2017-12-21

    Existing single-case studies have reported deficit in recognizing basic emotions through facial expression and unaffected performance with body expressions, but not the opposite pattern. The aim of this paper is to present a case study with impaired emotion recognition through body expressions and intact performance with facial expressions. In this single-case study we assessed a 30-year-old patient with autism spectrum disorder, without intellectual disability, and a healthy control group (n = 30) with four tasks of basic and complex emotion recognition through face and body movements, and two non-emotional control tasks. To analyze the dissociation between facial and body expressions, we used Crawford and Garthwaite's operational criteria, and we compared the patient and the control group performance with a modified one-tailed t-test designed specifically for single-case studies. There were no statistically significant differences between the patient's and the control group's performances on the non-emotional body movement task or the facial perception task. For both kinds of emotions (basic and complex) when the patient's performance was compared to the control group's, statistically significant differences were only observed for the recognition of body expressions. There were no significant differences between the patient's and the control group's correct answers for emotional facial stimuli. Our results showed a profile of impaired emotion recognition through body expressions and intact performance with facial expressions. This is the first case study that describes the existence of this kind of dissociation pattern between facial and body expressions of basic and complex emotions.

  29. The association between PTSD and facial affect recognition.

    PubMed

    Williams, Christian L; Milanak, Melissa E; Judah, Matt R; Berenbaum, Howard

    2018-05-05

    The major aims of this study were to examine how, if at all, having higher levels of PTSD would be associated with performance on a facial affect recognition task in which facial expressions of emotion are superimposed on emotionally valenced, non-face images. College students with trauma histories (N = 90) completed a facial affect recognition task as well as measures of exposure to traumatic events, and PTSD symptoms. When the face and context matched, participants with higher levels of PTSD were significantly more accurate. When the face and context were mismatched, participants with lower levels of PTSD were more accurate than were those with higher levels of PTSD. These findings suggest that PTSD is associated with how people process affective information. Furthermore, these results suggest that the enhanced attention of people with higher levels of PTSD to affective information can be either beneficial or detrimental to their ability to accurately identify facial expressions of emotion. Limitations, future directions and clinical implications are discussed.

  30. [Developmental change in facial recognition by premature infants during infancy].

    PubMed

    Konishi, Yukihiko; Kusaka, Takashi; Nishida, Tomoko; Isobe, Kenichi; Itoh, Susumu

    2014-09-01

    Premature infants are thought to be at increased risk for developmental disorders. We evaluated facial recognition by premature infants during early infancy, as this ability has been reported to be commonly impaired in developmentally disabled children. In premature infants and full-term infants at the age of 4 months (4 corrected months for premature infants), visual behaviors while performing facial recognition tasks were recorded and analyzed using an eye-tracking system (Tobii T60, manufactured by Tobii Technology, Sweden). Both groups of infants had a preference towards normal facial expressions; however, no preference towards the upper face was observed in premature infants. Our study suggests that facial recognition ability in premature infants may develop differently from that in full-term infants.

  31. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the regions descriptive of and responsible for facial expression are located around certain face parts. The contribution of this work lies in the proposition of a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using a Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region whilst reducing the feature vector dimension.
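
    A sketch of this region-selection idea under stated assumptions (the grid size, LBP parameters and data are illustrative, not the paper's settings): per-cell LBP histograms are computed on a gradient-magnitude image, and cells are ranked by mutual information with the expression labels:

        # Rank face-grid cells by how informative their LBP histograms
        # are about expression labels (Mutual Information).
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.feature_selection import mutual_info_classif

        def region_histograms(face, grid=(4, 4), P=8, R=1.0):
            gy, gx = np.gradient(face.astype(float))
            grad = np.hypot(gx, gy)                  # gradient-magnitude image
            lbp = local_binary_pattern(grad, P, R, method="uniform")
            h, w = lbp.shape
            feats = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    cell = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                               j * w // grid[1]:(j + 1) * w // grid[1]]
                    hist, _ = np.histogram(cell, bins=P + 2, range=(0, P + 2),
                                           density=True)
                    feats.append(hist)
            return np.array(feats)                   # (n_cells, P + 2)

        rng = np.random.default_rng(1)
        faces = rng.random((60, 64, 64))             # hypothetical face crops
        labels = rng.integers(0, 6, size=60)         # six expression classes
        F = np.stack([region_histograms(f) for f in faces])
        mi = [mutual_info_classif(F[:, c, :], labels, random_state=0).sum()
              for c in range(F.shape[1])]
        print("most discriminative cells:", np.argsort(mi)[::-1][:4])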

  32. Facial Expression Influences Face Identity Recognition During the Attentional Blink

    PubMed Central

    2014-01-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry—suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another. PMID:25286076

  33. Facial expression influences face identity recognition during the attentional blink.

    PubMed

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.

  34. Facial Emotion Recognition in Child Psychiatry: A Systematic Review

    ERIC Educational Resources Information Center

    Collin, Lisa; Bindra, Jasmeet; Raju, Monika; Gillberg, Christopher; Minnis, Helen

    2013-01-01

    This review focuses on facial affect (emotion) recognition in children and adolescents with psychiatric disorders other than autism. A systematic search, using PRISMA guidelines, was conducted to identify original articles published prior to October 2011 pertaining to face recognition tasks in case-control studies. Used in the qualitative…

  35. [Neural mechanisms of facial recognition].

    PubMed

    Nagai, Chiyoko

    2007-01-01

    We review recent research on the neural mechanisms of facial recognition in light of three aspects: facial discrimination and identification, recognition of facial expressions, and face perception in itself. First, it has been demonstrated that the fusiform gyrus plays a main role in facial discrimination and identification. However, whether the FFA (fusiform face area) is really a special area for facial processing or not is controversial; some researchers insist that the FFA is related to 'becoming an expert' for some kinds of visual objects, including faces. The neural mechanisms of prosopagnosia are deeply relevant to this issue. Second, the amygdala seems to be closely involved in the recognition of facial expressions, especially fear. The amygdala, connected with the superior temporal sulcus and the orbitofrontal cortex, appears to modulate cortical function. The amygdala and the superior temporal sulcus are related to gaze recognition, which explains why a patient with bilateral amygdala damage failed to recognize only the expression of fear: the information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, which is congruent with the innate hypothesis of facial recognition. Some researchers speculate that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network may underlie the covert recognition that prosopagnosic patients retain.

  36. Familial covariation of facial emotion recognition and IQ in schizophrenia.

    PubMed

    Andric, Sanja; Maric, Nadja P; Mihaljevic, Marina; Mirjanic, Tijana; van Os, Jim

    2016-12-30

    Alterations in general intellectual ability and social cognition in schizophrenia are core features of the disorder, evident at the illness' onset and persistent throughout its course. However, previous studies examining cognitive alterations in siblings discordant for schizophrenia have yielded inconsistent results. The present study aimed to investigate the nature of the association between facial emotion recognition and general IQ by applying a genetically sensitive cross-trait cross-sibling design. Participants (total n=158; patients, unaffected siblings, controls) were assessed using the Benton Facial Recognition Test, the Degraded Facial Affect Recognition Task (DFAR) and the Wechsler Adult Intelligence Scale-III. Patients had lower IQ and altered facial emotion recognition in comparison to other groups. Healthy siblings and controls did not significantly differ in IQ and DFAR performance, but siblings exhibited intermediate angry facial expression recognition. Cross-trait within-subject analyses showed significant associations between overall DFAR performance and IQ in all participants. Within-trait cross-sibling analyses found significant associations between patients' and siblings' IQ and overall DFAR performance, suggesting their familial clustering. Finally, cross-trait cross-sibling analyses revealed familial covariation of facial emotion recognition and IQ in siblings discordant for schizophrenia, further indicating their familial etiology. Both traits are important phenotypes for genetic studies and potential early clinical markers of schizophrenia-spectrum disorders.

  37. Targeting specific facial variation for different identification tasks.

    PubMed

    Aeria, Gillian; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    A conceptual framework that allows faces to be studied and compared objectively with biological validity is presented. The framework is a logical extension of modern morphometrics and statistical shape analysis techniques. Three-dimensional (3D) facial scans were collected from 255 healthy young adults. One scan depicted a smiling facial expression and another scan depicted a neutral expression. These facial scans were modelled in a Principal Component Analysis (PCA) space where Euclidean (ED) and Mahalanobis (MD) distances were used to form similarity measures. Within this PCA space, property pathways were calculated that expressed the direction of change in facial expression. Decomposition of distances into property-independent (D1) and dependent (D2) components along these pathways enabled the comparison of two faces in terms of the extent of a smiling expression. The performance of all distances was tested and compared in two types of experiments: classification tasks and a recognition task. In the classification tasks, individual facial scans were assigned to one or more population groups of smiling or neutral scans. The property-dependent (D2) component of both Euclidean and Mahalanobis distances performed best in the classification task, correctly assigning 99.8% of scans to the right population group. The recognition task tested whether a scan of an individual depicting a smiling/neutral expression could be positively identified when shown a scan of the same person depicting a neutral/smiling expression. ED1 and MD1 performed best, correctly identifying 97.8% and 94.8% of individual scans respectively as belonging to the same person despite differences in facial expression. It was concluded that decomposed components are superior to straightforward distances in achieving positive identifications, and that this presents a novel method for quantifying facial similarity. Additionally, although the undecomposed Mahalanobis distance often used in practice outperformed…
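
    A toy illustration of the distance decomposition described above: in PCA space, the difference between two scans is split into a component along an expression "property pathway" (property-dependent, D2) and an orthogonal residual (property-independent, D1). The pathway construction used here (mean smiling-minus-neutral direction) and the synthetic data are assumptions for illustration; the paper's construction may differ:

        # Decompose a face-to-face difference along an expression pathway.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        neutral = rng.normal(size=(100, 30))  # stand-ins for shape features
        smiling = neutral + 0.8 + rng.normal(scale=0.2, size=(100, 30))

        pca = PCA(n_components=10).fit(np.vstack([neutral, smiling]))
        N, S = pca.transform(neutral), pca.transform(smiling)

        pathway = S.mean(axis=0) - N.mean(axis=0)
        pathway /= np.linalg.norm(pathway)    # unit expression direction

        def decompose(a, b, direction):
            diff = b - a
            d2 = np.dot(diff, direction)                # expression extent
            d1 = np.linalg.norm(diff - d2 * direction)  # identity residual
            return d1, abs(d2)

        # Same person shown with two expressions: D2 captures the smile change.
        d1, d2 = decompose(N[0], S[0], pathway)
        print(f"D1 (property-independent) = {d1:.2f}, "
              f"D2 (property-dependent) = {d2:.2f}")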

  18. Facial recognition deficits as a potential endophenotype in bipolar disorder.

    PubMed

    Vierck, Esther; Porter, Richard J; Joyce, Peter R

    2015-11-30

    Bipolar disorder (BD) is considered a highly heritable and genetically complex disorder. Several cognitive functions, such as executive functions and verbal memory, have been suggested as promising candidates for endophenotypes. Although there is evidence for deficits in facial emotion recognition in individuals with BD, studies investigating these functions as endophenotypes are rare. The current study investigates emotion recognition as a potential endophenotype in BD by comparing 36 participants with BD, 24 of their first-degree relatives, and 40 healthy control participants in a computerised facial emotion recognition task. Group differences were evaluated using repeated-measures analysis of covariance with age as a covariate. Results revealed slowed emotion recognition in both the BD group and their relatives. Furthermore, BD participants were less accurate than healthy controls in their recognition of emotion expressions. We found no evidence of emotion-specific differences between groups. Our results provide evidence for facial recognition as a potential endophenotype in BD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. Intact anger recognition in depression despite aberrant visual facial information usage.

    PubMed

    Clark, Cameron M; Chiu, Carina G; Diaz, Ruth L; Goghari, Vina M

    2014-08-01

    Previous literature has indicated abnormalities in facial emotion recognition abilities, as well as deficits in basic visual processes, in major depression. However, the literature is unclear on a number of important factors, including whether these abnormalities represent deficient or enhanced emotion recognition abilities compared to control populations, and the degree to which basic visual deficits might impact this process. The present study investigated emotion recognition abilities for angry versus neutral facial expressions in a sample of undergraduate students with Beck Depression Inventory-II (BDI-II) scores indicative of moderate depression (i.e., ≥20), compared to matched low-BDI-II score (i.e., ≤2) controls, via the Bubbles Facial Emotion Perception Task. Results indicated unimpaired behavioural performance in discriminating angry from neutral expressions in the high depressive symptoms group relative to the minimal depressive symptoms group, despite evidence of an abnormal pattern of visual facial information usage. The generalizability of the current findings is limited by the highly structured nature of the facial emotion recognition task used, as well as the use of an analog sample of undergraduates scoring high in self-rated symptoms of depression rather than a clinical sample. Our findings suggest that basic visual processes are involved in emotion recognition abnormalities in depression, demonstrating consistency with the emotion recognition literature in other psychopathologies (e.g., schizophrenia, autism, social anxiety). Future research should seek to replicate these findings in clinical populations with major depression, and assess the association between aberrant face gaze behaviours and symptom severity and social functioning. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. More Pronounced Deficits in Facial Emotion Recognition for Schizophrenia than Bipolar Disorder

    PubMed Central

    Goghari, Vina M; Sponheim, Scott R

    2012-01-01

    Schizophrenia and bipolar disorder are typically separated in diagnostic systems. Behavioural, cognitive, and brain abnormalities associated with each disorder nonetheless overlap. We evaluated the diagnostic specificity of facial emotion recognition deficits in schizophrenia and bipolar disorder to determine whether select aspects of emotion recognition differed for the two disorders. The investigation used an experimental task that included the same facial images in an emotion recognition condition and an age recognition condition (to control for processes associated with general face recognition) in 27 schizophrenia patients, 16 bipolar I patients, and 30 controls. Schizophrenia and bipolar patients exhibited both shared and distinct aspects of facial emotion recognition deficits. Schizophrenia patients had deficits in recognizing angry facial expressions compared to healthy controls and bipolar patients. Compared to control participants, both schizophrenia and bipolar patients were more likely to mislabel facial expressions of anger as fear. Given that schizophrenia patients exhibited a deficit in emotion recognition for angry faces that did not appear to be due to generalized perceptual and cognitive dysfunction, improving recognition of threat-related expressions may be an important intervention target for improving social functioning in schizophrenia. PMID:23218816
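
    Error patterns such as "anger mislabeled as fear" fall directly out of a confusion matrix over trials; a minimal sketch (the trial data below are hypothetical):

        import numpy as np

        EMOTIONS = ["anger", "fear", "happiness", "sadness"]

        def confusion_matrix(true_labels, responses):
            """Rows = presented emotion, columns = response; cells = proportions."""
            idx = {e: i for i, e in enumerate(EMOTIONS)}
            m = np.zeros((len(EMOTIONS), len(EMOTIONS)))
            for t, r in zip(true_labels, responses):
                m[idx[t], idx[r]] += 1
            return m / m.sum(axis=1, keepdims=True)

        # Hypothetical responses in which anger is often mislabeled as fear:
        true = ["anger"] * 10 + ["fear"] * 10 + ["happiness"] * 5 + ["sadness"] * 5
        resp = (["anger"] * 6 + ["fear"] * 4      # 4 of 10 anger trials -> "fear"
                + ["fear"] * 9 + ["anger"] * 1
                + ["happiness"] * 5 + ["sadness"] * 5)
        m = confusion_matrix(true, resp)
        print("P(respond fear | anger shown) =", m[0, 1])  # 0.4 in this toy example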

  1. Biases in facial and vocal emotion recognition in chronic schizophrenia

    PubMed Central

    Dondaine, Thibaut; Robert, Gabriel; Péron, Julie; Grandjean, Didier; Vérin, Marc; Drapier, Dominique; Millet, Bruno

    2014-01-01

    There has been extensive research on impaired emotion recognition in schizophrenia in the facial and vocal modalities. The literature points to biases toward non-relevant emotions for emotional faces, but few studies have examined biases in emotion recognition across different modalities (facial and vocal). In order to test emotion recognition biases, we exposed 23 patients with stabilized chronic schizophrenia and 23 healthy controls (HCs) to emotional facial and vocal tasks, asking them to rate emotional intensity on visual analog scales. We showed that patients with schizophrenia provided higher intensity ratings on the non-target scales (e.g., the surprise scale for fear stimuli) than HCs for both tasks. Furthermore, with the exception of neutral vocal stimuli, they provided the same intensity ratings on the target scales as the HCs. These findings suggest that patients with chronic schizophrenia have emotional biases when judging emotional stimuli in the visual and vocal modalities. These biases may stem from a basic sensorial deficit, a high-order cognitive dysfunction, or both. The respective roles of prefrontal-subcortical circuitry and the basal ganglia are discussed. PMID:25202287

  2. Association of impaired facial affect recognition with basic facial and visual processing deficits in schizophrenia.

    PubMed

    Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue

    2009-06-15

    Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.
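
    Discrimination thresholds of this kind are commonly estimated by fitting a psychometric function to accuracy at each signal level; a sketch with a cumulative Gaussian (illustrative data, not the study's; in a two-alternative task accuracy rises from chance at 0.5 toward 1.0):

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(signal, mu, sigma):
            """Accuracy in a two-alternative task: 0.5 (guessing) to 1.0."""
            return 0.5 + 0.5 * norm.cdf(signal, loc=mu, scale=sigma)

        # Hypothetical emotional signal strengths (e.g., % morph toward fearful)
        levels   = np.array([10, 20, 30, 40, 60, 80], dtype=float)
        accuracy = np.array([0.52, 0.60, 0.74, 0.85, 0.96, 0.99])

        (mu, sigma), _ = curve_fit(psychometric, levels, accuracy, p0=[30, 10])
        # At signal = mu the curve crosses 75% correct, a common threshold
        # criterion; higher thresholds mean more signal needed (worse performance).
        print(f"75%-correct threshold ≈ {mu:.1f}% signal strength")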

  3. Autonomic imbalance is associated with reduced facial recognition in somatoform disorders.

    PubMed

    Pollatos, Olga; Herbert, Beate M; Wankner, Sarah; Dietel, Anja; Wachsmuth, Cornelia; Henningsen, Peter; Sack, Martin

    2011-10-01

    Somatoform disorders are characterized by the presence of multiple somatic symptoms. While the accuracy of perceiving bodily signals (interoceptive awareness) has only been sparsely investigated in somatoform disorders, recent research has associated autonomic imbalance with cognitive and emotional difficulties in stress-related diseases. This study aimed to investigate how sympathovagal reactivity interacts with performance in recognizing emotions in faces (facial recognition task). Using a facial recognition and appraisal task, skin conductance levels (SCLs), heart rate (HR) and heart rate variability (HRV) were assessed in 26 somatoform patients and compared to healthy controls. Interoceptive awareness was assessed by a heartbeat detection task. We found evidence for a sympathovagal imbalance in somatoform disorders, characterized by low parasympathetic reactivity during emotional tasks and increased sympathetic activation during baseline. Somatoform patients exhibited reduced recognition performance for neutral and sad emotional expressions only. Possible confounding variables such as alexithymia, anxiety or depression were taken into account. Interoceptive awareness was reduced in somatoform patients. Our data demonstrate an imbalance in sympathovagal activation in somatoform disorders associated with decreased parasympathetic activation. This might account for difficulties in the processing of sad and neutral facial expressions in somatoform patients, which might be a pathogenic mechanism for increased everyday vulnerability. Copyright © 2011 Elsevier Inc. All rights reserved.
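
    Parasympathetic (vagal) reactivity in such designs is often indexed by time-domain HRV statistics computed from successive heartbeat (RR) intervals; a minimal sketch of one common index, RMSSD (the values below are hypothetical):

        import numpy as np

        def rmssd(rr_ms):
            """Root mean square of successive RR-interval differences (ms),
            a standard time-domain index of vagally mediated HRV."""
            rr = np.asarray(rr_ms, dtype=float)
            return np.sqrt(np.mean(np.diff(rr) ** 2))

        # Hypothetical RR intervals (ms) at baseline vs. during an emotional task:
        baseline = [820, 850, 815, 860, 830, 845]
        task     = [800, 805, 798, 810, 802, 806]  # less variable: lower vagal tone
        print(f"RMSSD baseline: {rmssd(baseline):.1f} ms, task: {rmssd(task):.1f} ms")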

  4. Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.

    PubMed

    Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M

    2014-11-01

    Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. In order to identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were administered in order to assess potentially confounding visuo-perceptual functioning and facial processing. Nineteen participants with paranoid SZ, 22 with HFASD and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD in all obtained emotion recognition measures, but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is a SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but rather visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of the two disorders. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Novel dynamic Bayesian networks for facial action element recognition and understanding

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements often occur simultaneously, and their appearance changes when the pose changes. To address this problem, we first build a fully automatic facial point detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we used the Korean face database for model training. For testing, we used the CUbiC FacePix facial expressions and emotion database, the Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
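
    The first stage described above, a local Gabor filter bank whose responses are reduced by principal component analysis, can be sketched in a few lines (a simplified illustration, not the authors' implementation; all parameters are arbitrary):

        import numpy as np
        from scipy.signal import fftconvolve

        def gabor_kernel(size=21, sigma=4.0, theta=0.0, wavelength=8.0):
            """Real part of a 2D Gabor filter at orientation theta."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
            gauss = np.exp(-(x**2 + y**2) / (2 * sigma**2))
            return gauss * np.cos(2 * np.pi * xr / wavelength)

        def gabor_features(patch, n_orientations=8):
            """Filter a patch at several orientations and concatenate responses.
            PCA over such vectors (e.g., sklearn.decomposition.PCA) would give
            the low-dimensional features used to locate facial points."""
            feats = [fftconvolve(patch,
                                 gabor_kernel(theta=np.pi * k / n_orientations),
                                 mode="same").ravel()
                     for k in range(n_orientations)]
            return np.concatenate(feats)

        patch = np.random.default_rng(2).normal(size=(32, 32))  # stand-in patch
        print(gabor_features(patch).shape)  # (8 * 32 * 32,) = (8192,)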

  6. Facial recognition in education system

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

    Human beings make comprehensive use of emotions to convey and interpret messages. Emotion detection and face recognition can provide an interface between individuals and technologies. The most successful application of recognition analysis is the recognition of faces. Many different techniques have been used to recognize facial expressions and to detect emotion across varying poses. In this paper, we present an efficient method for recognizing facial expressions by tracking facial points and the distances between them. The method can automatically identify face movements and facial expressions in an image, capturing different aspects of emotion and facial expression.

  7. Composite Artistry Meets Facial Recognition Technology: Exploring the Use of Facial Recognition Technology to Identify Composite Images

    DTIC Science & Technology

    2011-09-01

    be submitted into a facial recognition program for comparison with millions of possible matches, offering abundant opportunities to identify the...to leverage the robust number of comparative opportunities associated with facial recognition programs. This research investigates the efficacy of...combining composite forensic artistry with facial recognition technology to create a viable investigative tool to identify suspects, as well as better

  8. Biometrics: A Look at Facial Recognition

    DTIC Science & Technology

    a facial recognition system in the city’s Oceanfront tourist area. The system has been tested and has recently been fully implemented. Senator...Kenneth W. Stolle, the Chairman of the Virginia State Crime Commission, established a Facial Recognition Technology Sub-Committee to examine the issue of... facial recognition technology. This briefing begins by defining biometrics and discussing examples of the technology. It then explains how biometrics

  9. Emotional recognition from dynamic facial, vocal and musical expressions following traumatic brain injury.

    PubMed

    Drapeau, Joanie; Gosselin, Nathalie; Peretz, Isabelle; McKerral, Michelle

    2017-01-01

    To assess emotion recognition from dynamic facial, vocal and musical expressions in sub-groups of adults with traumatic brain injuries (TBI) of different severities, and to identify possible common underlying mechanisms across domains. Forty-one adults participated in this study: 10 with moderate-severe TBI, nine with complicated mild TBI, 11 with uncomplicated mild TBI and 11 healthy controls, who were administered experimental (emotional recognition, valence-arousal) and control tasks (emotional and structural discrimination) for each domain. Recognition of fearful faces was significantly impaired in the moderate-severe and complicated mild TBI sub-groups, as compared to those with uncomplicated mild TBI and controls. Effect sizes were medium-large. Participants with lower Glasgow Coma Scale (GCS) scores performed more poorly when recognizing fearful dynamic facial expressions. Emotion recognition from the auditory domains was preserved following TBI, irrespective of severity. All groups performed equally well on control tasks, indicating no perceptual disorders. Although emotion recognition from vocal and musical expressions was preserved, no correlation was found across the auditory domains. This preliminary study may contribute to improving comprehension of emotion recognition following TBI. Future studies with larger samples could usefully include measures of the functional impacts of recognition deficits for fearful facial expressions. These could help refine interventions for emotion recognition following a brain injury.

  10. Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition.

    PubMed

    de la Rosa, Stephan; Fademrecht, Laura; Bülthoff, Heinrich H; Giese, Martin A; Curio, Cristóbal

    2018-06-01

    Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression to that of the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results are well accounted for by the idea that facial expression recognition is not always mediated by motor processes; expressions can also be recognized from visual information alone.

  11. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  12. Test battery for measuring the perception and recognition of facial expressions of emotion

    PubMed Central

    Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner

    2014-01-01

    Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528
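
    Internal consistency estimates of the kind reported here are typically computed as Cronbach's alpha over item scores; a minimal implementation (illustrative simulated data):

        import numpy as np

        def cronbach_alpha(scores):
            """scores: (n_subjects, n_items) matrix of item scores."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(3)
        ability = rng.normal(size=(200, 1))                      # latent ability
        items = ability + rng.normal(scale=0.8, size=(200, 12))  # 12 related items
        print(f"alpha ≈ {cronbach_alpha(items):.2f}")            # high consistency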

  13. Neuroanatomical substrates involved in unrelated false facial recognition.

    PubMed

    Ronzon-Gonzalez, Eliane; Hernandez-Castillo, Carlos R; Pasaye, Erick H; Vaca-Palomares, Israel; Fernandez-Ruiz, Juan

    2017-11-22

    Identifying faces is a process central to social interaction and a relevant factor in eyewitness theory. False recognition is a critical mistake during an eyewitness's identification scenario because it can lead to a wrongful conviction. Previous studies have described neural areas related to false facial recognition using the standard Deese/Roediger-McDermott (DRM) paradigm, which triggers related false recognition. Nonetheless, misidentification of faces without any attempt to elicit false memories (unrelated false recognition), as in a police lineup, could involve different cognitive processes and distinct neural areas. To delve into the neural circuitry of unrelated false recognition, we evaluated the memory and response confidence of participants while they viewed face photographs in an fMRI task. Functional activations associated with unrelated false recognition were identified by contrasting the activation in this condition vs. the activations related to recognition (hits) and correct rejections. The results identified the right precentral and cingulate gyri as areas with distinctive activations during false recognition events, suggesting a conflict during memory retrieval. High confidence ratings suggested that about 50% of misidentifications may be related to an unconscious process. These findings add to our understanding of the construction of facial memories and its biological basis, and of the fallibility of eyewitness testimony.
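
    The trial categories contrasted in such an analysis follow standard recognition-memory scoring; a schematic version (labels and data are hypothetical):

        def score_trials(studied, responded_old):
            """Classify each test trial by study status and old/new response."""
            categories = []
            for old, said_old in zip(studied, responded_old):
                if old and said_old:        categories.append("hit")
                elif old and not said_old:  categories.append("miss")
                elif not old and said_old:  categories.append("false_recognition")
                else:                       categories.append("correct_rejection")
            return categories

        studied       = [True, True, False, False, False]
        responded_old = [True, False, True, False, True]
        print(score_trials(studied, responded_old))
        # ['hit', 'miss', 'false_recognition', 'correct_rejection',
        #  'false_recognition'] -- the contrast of interest compares activation
        # on false_recognition trials against hits and correct rejections.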

  14. Facial recognition in children after perinatal stroke.

    PubMed

    Ballantyne, A O; Trauner, D A

    1999-04-01

    To examine the effects of prenatal or perinatal stroke on the facial recognition skills of children and young adults. It was hypothesized that the nature and extent of facial recognition deficits seen in patients with early-onset lesions would be different from that seen in adults with later-onset neurologic impairment. Numerous studies with normal and neurologically impaired adults have found a right-hemisphere superiority for facial recognition. In contrast, little is known about facial recognition in children after early focal brain damage. Forty subjects had single, unilateral brain lesions from pre- or perinatal strokes (20 had left-hemisphere damage, and 20 had right-hemisphere damage), and 40 subjects were controls who were individually matched to the lesion subjects on the basis of age, sex, and socioeconomic status. Each subject was given the Short-Form of Benton's Test of Facial Recognition. Data were analyzed using the Wilcoxon matched-pairs signed-rank test and multiple regression. The lesion subjects performed significantly more poorly than did matched controls. There was no clear-cut lateralization effect, with the left-hemisphere group performing significantly more poorly than matched controls and the right-hemisphere group showing a trend toward poorer performance. Parietal lobe involvement, regardless of lesion side, adversely affected facial recognition performance in the lesion group. Results could not be accounted for by IQ differences between lesion and control groups, nor was lesion severity systematically related to facial recognition performance. Pre- or perinatal unilateral brain damage results in a subtle disturbance in facial recognition ability, independent of the side of the lesion. Parietal lobe involvement, in particular, has an adverse effect on facial recognition skills. These findings suggest that the parietal lobes may be involved in the acquisition of facial recognition ability from a very early point in brain development, but

  15. [Recognition of facial emotions and theory of mind in schizophrenia: could the theory of mind deficit be due to the non-recognition of facial emotions?].

    PubMed

    Besche-Richard, C; Bourrin-Tisseron, A; Olivier, M; Cuervo-Lombard, C-V; Limosin, F

    2012-06-01

    The deficits of schizophrenic patients in recognizing facial emotions and attributing mental states are now well documented. However, the link between these two complex cognitive functions, especially in schizophrenia, remains unclear. In this study, we tested the link between the recognition of facial emotions and mentalizing capacities, notably the attribution of beliefs, in healthy and schizophrenic participants. We hypothesized that performance in recognizing facial emotions, rather than working memory or executive functioning, would be the best predictor of the capacity to attribute beliefs. Twenty schizophrenic participants according to DSM-IV-TR criteria (mean age: 35.9 years, S.D. 9.07; mean education level: 11.15 years, S.D. 2.58), clinically stabilized and receiving neuroleptic or antipsychotic medication, participated in the study. They were matched on age (mean age: 36.3 years, S.D. 10.9) and educational level (mean educational level: 12.10, S.D. 2.25) with 30 healthy participants. All participants were evaluated with a battery of tasks testing the recognition of facial emotions (the faces of Baron-Cohen), the attribution of beliefs (two first-order and two second-order stories), working memory (the digit span of the WAIS-III and the Corsi test) and executive functioning (Trail Making Test A and B, Wisconsin Card Sorting Test brief version). Comparing schizophrenic and healthy participants, our results confirmed a difference between performance on the recognition of facial emotions and on the attribution of beliefs. A simple linear regression showed that the recognition of facial emotions, compared to working memory and executive functioning, was the best predictor of performance on the theory of mind stories. Our results confirmed, in a sample of schizophrenic patients, the deficits in the recognition of facial emotions and in the

  16. Interference with facial emotion recognition by verbal but not visual loads.

    PubMed

    Reed, Phil; Steed, Ian

    2015-12-01

    The ability to recognize emotions through facial characteristics is critical for social functioning, but is often impaired in those with a developmental or intellectual disability. The current experiments explored the degree to which interfering with the processing capacities of typically-developing individuals would produce a similar inability to recognize emotions through the facial elements of faces displaying particular emotions. It was found that increasing the cognitive load (in an attempt to model learning impairments in a typically developing population) produced deficits in correctly identifying emotions from facial elements. However, this effect was much more pronounced when using a concurrent verbal task than when employing a concurrent visual task, suggesting that there is a substantial verbal element to the labeling and subsequent recognition of emotions. This concurs with previous work conducted with those with developmental disabilities that suggests emotion recognition deficits are connected with language deficits. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?

    PubMed

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James

    2017-01-01

    Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants in order to explore the relationship between these two abilities and any differences between the two groups of participants. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability of recognizing emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer-screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. For emotion recognition, PD reported lower score than HC for Ekman total score (p<0.001), and for single emotions sub-scores happiness, fear, anger, sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness, anger (all p<0.001). RT and the level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between the emotion facial recognition and expressivity in both groups; the

  18. [Neurological disease and facial recognition].

    PubMed

    Kawamura, Mitsuru; Sugimoto, Azusa; Kobayakawa, Mutsutaka; Tsuruya, Natsuko

    2012-07-01

    To discuss the neurological basis of facial recognition, we present our case reports of impaired recognition and a review of previous literature. First, we present a case of infarction and discuss prosopagnosia, which has had a large impact on face recognition research. From a study of patient symptoms, we infer that prosopagnosia may be caused by a unilateral right occipitotemporal lesion, consistent with right cerebral dominance of facial recognition. Furthermore, circumscribed lesions and degenerative disease may also cause progressive prosopagnosia. Apperceptive prosopagnosia is observed in patients with posterior cortical atrophy (PCA), pathologically considered to be Alzheimer's disease, and associative prosopagnosia is observed in frontotemporal lobar degeneration (FTLD). Second, we discuss face recognition as part of communication. Patients with Parkinson disease show social cognitive impairments, such as difficulty in facial expression recognition and deficits in theory of mind as detected by the Reading the Mind in the Eyes test. Pathological and functional imaging studies indicate that social cognitive impairment in Parkinson disease is possibly related to damage in the amygdalae and surrounding limbic system. These social cognitive deficits can be observed in the early stages of Parkinson disease, and even in the prodromal stage; for example, patients with rapid eye movement (REM) sleep behavior disorder (RBD) show impairment in facial expression recognition. Furthermore, patients with myotonic dystrophy type 1 (DM 1), a multisystem disease that mainly affects the muscles, show social cognitive impairment similar to that of Parkinson disease. Our previous study showed that the facial expression recognition impairment of DM 1 patients is associated with lesions in the amygdalae and insulae. Our results indicate that the behaviors and personality traits of DM 1 patients, which are revealed by social cognitive impairment, are attributable to dysfunction of the limbic system.

  19. Deficits in recognition, identification, and discrimination of facial emotions in patients with bipolar disorder.

    PubMed

    Benito, Adolfo; Lahera, Guillermo; Herrera, Sara; Muncharaz, Ramón; Benito, Guillermo; Fernández-Liria, Alberto; Montes, José Manuel

    2013-01-01

    To analyze the recognition, identification, and discrimination of facial emotions in a sample of outpatients with bipolar disorder (BD). Forty-four outpatients with diagnosis of BD and 48 matched control subjects were selected. Both groups were assessed with tests for recognition (Emotion Recognition-40 - ER40), identification (Facial Emotion Identification Test - FEIT), and discrimination (Facial Emotion Discrimination Test - FEDT) of facial emotions, as well as a theory of mind (ToM) verbal test (Hinting Task). Differences between groups were analyzed, controlling the influence of mild depressive and manic symptoms. Patients with BD scored significantly lower than controls on recognition (ER40), identification (FEIT), and discrimination (FEDT) of emotions. Regarding the verbal measure of ToM, a lower score was also observed in patients compared to controls. Patients with mild syndromal depressive symptoms obtained outcomes similar to patients in euthymia. A significant correlation between FEDT scores and global functioning (measured by the Functioning Assessment Short Test, FAST) was found. These results suggest that, even in euthymia, patients with BD experience deficits in recognition, identification, and discrimination of facial emotions, with potential functional implications.

  20. Differential amygdala response during facial recognition in patients with schizophrenia: an fMRI study.

    PubMed

    Kosaka, H; Omori, M; Murata, T; Iidaka, T; Yamada, H; Okada, T; Takahashi, T; Sadato, N; Itoh, H; Yonekura, Y; Wada, Y

    2002-09-01

    Human lesion and neuroimaging studies suggest that the amygdala is involved in facial emotion recognition. Although impairments in recognition of facial and/or emotional expression have been reported in schizophrenia, there are few neuroimaging studies that have examined differential brain activation during facial recognition between patients with schizophrenia and normal controls. To investigate amygdala responses during facial recognition in schizophrenia, we conducted a functional magnetic resonance imaging (fMRI) study with 12 right-handed medicated patients with schizophrenia and 12 age- and sex-matched healthy controls. The experimental task was an emotional intensity judgment task. During the task period, subjects were asked to view happy (or angry/disgusted/sad) and neutral faces simultaneously presented every 3 s and to judge which face was more emotional (positive or negative face discrimination). Imaging data were analysed on a voxel-by-voxel basis for single-group analysis and for between-group analysis according to the random effect model using Statistical Parametric Mapping (SPM). No significant difference in task accuracy was found between the schizophrenic and control groups. Positive face discrimination activated the bilateral amygdalae in both controls and schizophrenics, with more prominent activation of the right amygdala in the schizophrenic group. Negative face discrimination activated the bilateral amygdalae in the schizophrenic group but only the right amygdala in the control group, although no significant group difference was found. The exaggerated amygdala activation during emotional intensity judgment found in the schizophrenic patients may reflect impaired gating of sensory input containing emotion. Copyright 2002 Elsevier Science B.V.
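
    At the group level, a random-effects analysis of this kind reduces, per voxel, to a test on subject-level contrast images; a schematic two-sample t-test version (illustrative arrays only; SPM additionally handles smoothing, thresholding, and multiple-comparison correction):

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(4)
        # Subject-level contrast maps (emotional vs. neutral faces), voxels flattened.
        patients = rng.normal(size=(12, 1000))
        controls = rng.normal(size=(12, 1000))
        patients[:, 500] += 1.0  # inject a group difference at one "amygdala" voxel

        t, p = ttest_ind(patients, controls, axis=0)  # voxel-by-voxel comparison
        print("most significant voxel:", int(np.argmin(p)), "p =", float(p.min()))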

  1. Oxytocin Promotes Facial Emotion Recognition and Amygdala Reactivity in Adults with Asperger Syndrome

    PubMed Central

    Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C

    2014-01-01

    The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS. PMID:24067301

  2. Oxytocin promotes facial emotion recognition and amygdala reactivity in adults with asperger syndrome.

    PubMed

    Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C

    2014-02-01

    The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS.

  3. Deficits in facial affect recognition among antisocial populations: a meta-analysis.

    PubMed

    Marsh, Abigail A; Blair, R J R

    2008-01-01

    Individuals with disorders marked by antisocial behavior frequently show deficits in recognizing displays of facial affect. Antisociality may be associated with specific deficits in identifying fearful expressions, which would implicate dysfunction in neural structures that subserve fearful expression processing. A meta-analysis of 20 studies was conducted to assess: (a) if antisocial populations show any consistent deficits in recognizing six emotional expressions; (b) beyond any generalized impairment, whether specific fear recognition deficits are apparent; and (c) if deficits in fear recognition are a function of task difficulty. Results show a robust link between antisocial behavior and specific deficits in recognizing fearful expressions. This impairment cannot be attributed solely to task difficulty. These results suggest dysfunction among antisocial individuals in specified neural substrates, namely the amygdala, involved in processing fearful facial affect.

  4. Facial Emotion Recognition by Persons with Mental Retardation: A Review of the Experimental Literature.

    ERIC Educational Resources Information Center

    Rojahn, Johannes; And Others

    1995-01-01

    This literature review discusses 21 studies on facial emotion recognition by persons with mental retardation in terms of methodological characteristics, stimulus material, salient variables and their relation to recognition tasks, and emotion recognition deficits in mental retardation. A table provides comparative data on all 21 studies. (DB)

  5. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    PubMed Central

    Wang, Yamin; Fu, Xiaolan; Johnston, Robert A.; Yan, Zheng

    2013-01-01

    Using Garner’s speeded classification task, existing studies have demonstrated an asymmetric interference in the recognition of facial identity and facial expression: expression seems unable to interfere with identity recognition. However, the discriminability of identity and expression, a potential confounding variable, had not been carefully examined in those studies. In the current work, we manipulated the discriminability of identity and expression by matching facial shape (long or round) for identity and matching the mouth (opened or closed) for facial expression. Garner interference was found either from identity to expression (Experiment 1) or from expression to identity (Experiment 2). Interference was also found in both directions (Experiment 3) or in neither direction (Experiment 4). The results support the view that Garner interference tends to occur when the relevant dimension has low discriminability, regardless of the facial property. Our findings indicate that Garner interference is not necessarily related to interdependent processing in the recognition of facial identity and expression. They also suggest that discriminability, as a mediating factor, should be carefully controlled in future research. PMID:24391609
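
    Garner interference is operationalized as the reaction-time cost of letting the irrelevant dimension vary; a schematic computation (the RT values below are hypothetical):

        import numpy as np

        rng = np.random.default_rng(6)
        # Classify identity while expression is held constant (baseline block)
        # or varies orthogonally from trial to trial (filtering block).
        rt_baseline  = rng.normal(620, 40, size=100)  # ms, hypothetical
        rt_filtering = rng.normal(655, 40, size=100)

        interference = rt_filtering.mean() - rt_baseline.mean()
        print(f"Garner interference ≈ {interference:.0f} ms")
        # In the study's terms: this cost appears when the relevant dimension
        # has low discriminability and vanishes when discriminability is high.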

  6. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    PubMed

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding, and thus to neurocognitive dysfunctions such as deficits in facial affect recognition. To gain insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event Related Spectral Perturbation (ERSP) and the Inter Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were significantly weaker in patients compared with healthy controls, which is in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate less effective functioning in the recognition process of facial features, which may contribute to less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
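
    Inter-trial coherence at a given frequency is the length of the mean unit phase vector across trials (0 = random phase, 1 = perfect phase-locking); a compact sketch with a Morlet-like wavelet over simulated trials (all parameters are illustrative):

        import numpy as np
        from scipy.signal import fftconvolve

        fs, f0 = 500, 5.0                     # sampling rate (Hz), theta frequency
        t = np.arange(-0.2, 0.6, 1 / fs)

        def morlet(f, n_cycles=5):
            sigma_t = n_cycles / (2 * np.pi * f)
            tw = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
            return np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma_t**2))

        rng = np.random.default_rng(5)
        # 40 trials: a phase-locked theta burst after "stimulus onset" plus noise.
        trials = np.array([np.sin(2 * np.pi * f0 * t) * (t > 0)
                           + rng.normal(scale=1.0, size=t.size)
                           for _ in range(40)])

        w = morlet(f0)
        analytic = np.array([fftconvolve(tr, w, mode="same") for tr in trials])
        phase = analytic / np.abs(analytic)   # unit phase vectors per trial
        itc = np.abs(phase.mean(axis=0))      # inter-trial coherence over time
        window = (t >= 0.14) & (t <= 0.20)    # the 140-200 ms N170 window
        print(f"theta ITC in 140-200 ms: {itc[window].mean():.2f}")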

  7. The familial basis of facial emotion recognition deficits in adolescents with conduct disorder and their unaffected relatives.

    PubMed

    Sully, K; Sonuga-Barke, E J S; Fairchild, G

    2015-07-01

    There is accumulating evidence of impairments in facial emotion recognition in adolescents with conduct disorder (CD). However, the majority of studies in this area have only been able to demonstrate an association, rather than a causal link, between emotion recognition deficits and CD. To move closer towards understanding the causal pathways linking emotion recognition problems with CD, we studied emotion recognition in the unaffected first-degree relatives of CD probands, as well as those with a diagnosis of CD. Using a family-based design, we investigated facial emotion recognition in probands with CD (n = 43), their unaffected relatives (n = 21), and healthy controls (n = 38). We used the Emotion Hexagon task, an alternative forced-choice task using morphed facial expressions depicting the six primary emotions, to assess facial emotion recognition accuracy. Relative to controls, the CD group showed impaired recognition of anger, fear, happiness, sadness and surprise (all p < 0.005). Similar to probands with CD, unaffected relatives showed deficits in anger and happiness recognition relative to controls (all p < 0.008), with a trend toward a deficit in fear recognition. There were no significant differences in performance between the CD probands and the unaffected relatives following correction for multiple comparisons. These results suggest that facial emotion recognition deficits are present in adolescents who are at increased familial risk for developing antisocial behaviour, as well as those who have already developed CD. Consequently, impaired emotion recognition appears to be a viable familial risk marker or candidate endophenotype for CD.

  8. The look of fear and anger: facial maturity modulates recognition of fearful and angry expressions.

    PubMed

    Sacco, Donald F; Hugenberg, Kurt

    2009-02-01

    The current series of studies provide converging evidence that facial expressions of fear and anger may have co-evolved to mimic mature and babyish faces in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes) and in the context of a speeded categorization task in Study 1 and a visual noise paradigm in Study 2, results indicated that larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. Results are discussed as representing a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. (c) 2009 APA, all rights reserved

  9. Facial Emotion Recognition and Expression in Parkinson’s Disease: An Emotional Mirror Mechanism?

    PubMed Central

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J.; Kilner, James

    2017-01-01

    Background and aim: Parkinson’s disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants in order to explore the relationship between these two abilities and any differences between the two groups of participants. Methods: Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability of recognizing emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer-screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. Results: For emotion recognition, PD reported lower score than HC for Ekman total score (p<0.001), and for single emotions sub-scores happiness, fear, anger, sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness, anger (all p<0.001). RT and the level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between the emotion facial recognition and

  10. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    PubMed Central

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been documented (Ruffman et al., 2008), with older adults showing a deficit in the recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in depth younger and older adults’ ability to label a virtual agent’s facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand whether those age-related differences are influenced by the intensity of the emotion, the dynamic formation of the emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character, differing in human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition of expressions displayed by three types of virtual characters differing in human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  11. Theory of mind and recognition of facial emotion in dementia: challenge to current concepts.

    PubMed

    Freedman, Morris; Binns, Malcolm A; Black, Sandra E; Murphy, Cara; Stuss, Donald T

    2013-01-01

    Current literature suggests that theory of mind (ToM) and recognition of facial emotion are impaired in behavioral variant frontotemporal dementia (bvFTD). In contrast, studies suggest that ToM is spared in Alzheimer disease (AD). However, there is controversy whether recognition of emotion in faces is impaired in AD. This study challenges the concepts that ToM is preserved in AD and that recognition of facial emotion is impaired in bvFTD. ToM, recognition of facial emotion, and identification of emotions associated with video vignettes were studied in bvFTD, AD, and normal controls. ToM was assessed using false-belief and visual perspective-taking tasks. Identification of facial emotion was tested using Ekman and Friesen's pictures of facial affect. After adjusting for relevant covariates, there were significant ToM deficits in bvFTD and AD compared with controls, whereas neither group was impaired in the identification of emotions associated with video vignettes. There was borderline impairment in recognizing angry faces in bvFTD. Patients with AD showed significant deficits on false belief and visual perspective taking, and bvFTD patients were impaired on second-order false belief. We report novel findings challenging the concepts that ToM is spared in AD and that recognition of facial emotion is impaired in bvFTD.

  12. Cerebellum and processing of negative facial emotions: cerebellar transcranial DC stimulation specifically enhances the emotional recognition of facial anger and sadness.

    PubMed

    Ferrucci, Roberta; Giannicola, Gaia; Rosa, Manuela; Fumagalli, Manuela; Boggio, Paulo Sergio; Hallett, Mark; Zago, Stefano; Priori, Alberto

    2012-01-01

    Some evidence suggests that the cerebellum participates in the complex network processing emotional facial expression. To evaluate the role of the cerebellum in recognising facial expressions we delivered transcranial direct current stimulation (tDCS) over the cerebellum and prefrontal cortex. A facial emotion recognition task was administered to 21 healthy subjects before and after cerebellar tDCS; we also tested subjects with a visual attention task and a visual analogue scale (VAS) for mood. Anodal and cathodal cerebellar tDCS both significantly enhanced sensory processing in response to negative facial expressions (anodal tDCS, p=.0021; cathodal tDCS, p=.018), but left positive emotion and neutral facial expressions unchanged (p>.05). tDCS over the right prefrontal cortex left facial expressions of both negative and positive emotion unchanged. These findings suggest that the cerebellum is specifically involved in processing facial expressions of negative emotion.

  13. [Emotional facial expression recognition impairment in Parkinson disease].

    PubMed

    Lachenal-Chevallet, Karine; Bediou, Benoit; Bouvard, Martine; Thobois, Stéphane; Broussolle, Emmanuel; Vighetto, Alain; Krolak-Salmon, Pierre

    2006-03-01

    Some behavioral disturbances observed in Parkinson's disease (PD) could be related to impaired recognition of various social messages, particularly emotional facial expressions. Facial expression recognition was assessed using morphed faces (five emotions: happiness, fear, anger, disgust, neutral) and compared to gender recognition and general cognitive assessment in 12 patients with Parkinson's disease and 14 control subjects. Facial expression recognition was impaired among patients, whereas gender recognition, visuo-perceptive capacities and total efficiency were preserved. Post hoc analyses disclosed a deficit for fear and disgust recognition compared to control subjects. The impairment of emotional facial expression recognition in PD appears independent of other cognitive deficits. It may be related to dopaminergic depletion in the basal ganglia and limbic brain regions, and it could play a part in the psycho-behavioral disorders, particularly the communication disorders, observed in Parkinson's disease patients.

  14. A facial expression of pax: Assessing children's "recognition" of emotion from faces.

    PubMed

    Nelson, Nicole L; Russell, James A

    2016-01-01

    In a classic study, children were shown an array of facial expressions and asked to choose the person who expressed a specific emotion. Children were later asked to name the emotion in the face with any label they wanted. Subsequent research often relied on the same two tasks--choice from array and free labeling--to support the conclusion that children recognize basic emotions from facial expressions. Here, five studies (N=120, 2- to 10-year-olds) showed that these two tasks produce illusory recognition when a novel nonsense facial expression is included in the array. Children "recognized" a nonsense emotion (pax or tolen) and two familiar emotions (fear and jealousy) from the same nonsense face. Children likely used a process of elimination; they paired the unknown facial expression with a label given in the choice-from-array task and, after just two trials, freely labeled the new facial expression with the new label. These data indicate that past studies using this method may have overestimated children's expression knowledge. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Incongruence Between Observers' and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli.

    PubMed

    Wingenbach, Tanja S H; Brosnan, Mark; Pfaltz, Monique C; Plichta, Michael M; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others' facial emotion can elicit the respective emotion representation in observers, which entails simulations of sensory, motor, and contextual experiences. In line with this, published research has found that viewing others' facial emotion elicits automatic matched facial muscle activation, which in turn facilitates emotion recognition. Making congruent facial muscle activity explicit might thus produce an even greater recognition advantage, whereas conflicting sensory information, i.e., incongruent facial muscle activity, might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) conditions (a) and (b) would result in greater facial muscle activity than (c), (2) condition (a) would increase emotion recognition accuracy from others' faces compared to (c), and (3) condition (b) would lower recognition accuracy for expressions with a salient facial feature in the lower, but not the upper, face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The order of the experimental conditions was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.

  16. Mapping structural covariance networks of facial emotion recognition in early psychosis: A pilot study.

    PubMed

    Buchy, Lisa; Barbato, Mariapaola; Makowski, Carolina; Bray, Signe; MacMaster, Frank P; Deighton, Stephanie; Addington, Jean

    2017-11-01

    People with psychosis show deficits recognizing facial emotions and disrupted activation in the underlying neural circuitry. We evaluated associations between facial emotion recognition and cortical thickness using a correlation-based approach to map structural covariance networks across the brain. Fifteen people with early psychosis provided magnetic resonance scans and completed the Penn Emotion Recognition and Differentiation tasks. Fifteen historical controls provided magnetic resonance scans. Cortical thickness was computed using CIVET and analyzed with linear models. Seed-based structural covariance analysis was done using the mapping anatomical correlations across cerebral cortex (MACACC) methodology. To map structural covariance networks involved in facial emotion recognition, the right somatosensory cortex and bilateral fusiform face areas were selected as seeds. Statistics were run in SurfStat. Findings showed greater cortical covariance between the right fusiform face area seed and the right orbitofrontal cortex in controls than in early psychosis subjects. Facial emotion recognition scores were not significantly associated with thickness in any region. A negative effect of Penn Differentiation scores on cortical covariance was seen between the left fusiform face area seed and the right superior parietal lobule in early psychosis subjects. Results suggest that facial emotion recognition ability is related to covariance in a temporal-parietal network in early psychosis. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Facial Recognition Training: Improving Intelligence Collection by Soldiers

    DTIC Science & Technology

    2008-01-01

    Facial Recognition Training: Improving Intelligence Collection by Soldiers. By: 2LT Michael Mitchell, MI, ALARNG. “In combat, you don’t rise to...technology, but on patrol a Soldier cannot use a device as quickly as simply looking at the subject. Why is Facial Recognition Difficult? Soldiers...

  18. EMOTION RECOGNITION OF VIRTUAL AGENTS FACIAL EXPRESSIONS: THE EFFECTS OF AGE AND EMOTION INTENSITY

    PubMed Central

    Beer, Jenay M.; Fisk, Arthur D.; Rogers, Wendy A.

    2014-01-01

    People make determinations about the social characteristics of an agent (e.g., robot or virtual agent) by interpreting social cues displayed by the agent, such as facial expressions. Although a considerable amount of research has investigated age-related differences in emotion recognition of human faces (e.g., Sullivan & Ruffman, 2004), the effect of age on emotion identification of virtual agent facial expressions has been largely unexplored. Age-related differences in emotion recognition of facial expressions are an important factor to consider in the design of agents that may assist older adults in a recreational or healthcare setting. The purpose of the current research was to investigate whether age-related differences in facial emotion recognition extend to emotion-expressive virtual agents. Younger and older adults performed a recognition task with a virtual agent expressing six basic emotions. Larger age-related differences were expected for virtual agents displaying negative emotions, such as anger, sadness, and fear. In fact, the results indicated that older adults showed a decrease in emotion recognition accuracy for a virtual agent's emotions of anger, fear, and happiness. PMID:25552896

  19. Impaired recognition of happy facial expressions in bipolar disorder.

    PubMed

    Lawlor-Savage, Linette; Sponheim, Scott R; Goghari, Vina M

    2014-08-01

    The ability to accurately judge facial expressions is important in social interactions. Individuals with bipolar disorder have been found to be impaired in emotion recognition; however, the specifics of the impairment are unclear. This study investigated whether facial emotion recognition difficulties in bipolar disorder reflect general cognitive, or emotion-specific, impairments. Impairment in the recognition of particular emotions and the role of processing speed in facial emotion recognition were also investigated. Clinically stable bipolar patients (n = 17) and healthy controls (n = 50) judged five facial expressions in two presentation types, time-limited and self-paced. An age recognition condition was used as an experimental control. Bipolar patients' overall facial recognition ability was unimpaired. However, patients' specific ability to judge happy expressions under time constraints was impaired. Findings suggest a deficit in happy emotion recognition impacted by processing speed. Given the limited sample size, further investigation with a larger patient sample is warranted.

  1. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image-processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, operating on images from video sequences, dedicated to identifying persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis (2DPCA) and two-dimensional linear discriminant analysis (2DLDA) with a neural network. We performed feature extraction on the eye and nose images separately, and then used a Multi-Layer Perceptron classifier. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes, 98.16% for the nose, and 97.25% for the whole face).
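
    To make the pipeline concrete, here is a minimal sketch of the 2DPCA stage: each image matrix is projected onto the top eigenvectors of the image covariance matrix, and the flattened projections feed an MLP classifier. This is an illustration under assumptions, not the authors' ACPDL2D: the 2DLDA step is omitted, and the patch sizes, network width, and random data are placeholders.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def twod_pca(images, n_components):
        """2DPCA: top eigenvectors of the n x n image covariance matrix,
        computed from image matrices without vectorizing them."""
        X = np.asarray(images, dtype=float)            # shape (N, m, n)
        mean = X.mean(axis=0)
        centered = X - mean
        G = np.einsum('kij,kil->jl', centered, centered) / len(X)
        _, eigvecs = np.linalg.eigh(G)                 # ascending eigenvalues
        return mean, eigvecs[:, -n_components:]        # keep top-d columns

    def project(images, mean, W):
        """Project each image matrix onto the 2DPCA basis: (N, m, d)."""
        return (np.asarray(images, dtype=float) - mean) @ W

    # Toy demo on random "eye patches"; substitute real eye/nose crops.
    rng = np.random.default_rng(0)
    imgs, labels = rng.random((100, 32, 64)), rng.integers(0, 5, 100)
    mean, W = twod_pca(imgs, n_components=8)
    feats = project(imgs, mean, W).reshape(len(imgs), -1)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(feats, labels)
    ```

    Working on image matrices keeps the covariance matrix n x n rather than (mn) x (mn), which is the memory advantage the abstract alludes to.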

  2. Facial expression recognition based on weber local descriptor and sparse representation

    NASA Astrophysics Data System (ADS)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During this decade, many state-of-the-art methods have been proposed which achieve very high accuracy on face images free of any interference. Nowadays, many researchers have begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method comprises three steps: first, the face image is divided into many local patches; then, the WLD histogram of each patch is extracted; finally, all the WLD histograms are concatenated into a feature vector and classified with SRC. Experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
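
    The differential-excitation half of WLD is easy to sketch: neighbor differences are pooled relative to the center pixel's intensity and passed through arctan, and per-patch histograms of the result are concatenated into the feature vector. This is a minimal sketch under assumptions (the patch and bin sizes are placeholders); WLD's orientation component and the SRC classifier itself (an l1-minimization over a training dictionary) are omitted.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def wld_differential_excitation(img, alpha=3.0, eps=1e-6):
        """Differential excitation: arctan(alpha * sum of the 8 neighbor
        differences relative to the center pixel's intensity)."""
        img = np.asarray(img, dtype=float)
        kernel = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=float)
        diff_sum = convolve(img, kernel, mode='reflect')   # sum(x_i) - 8*x_c
        return np.arctan(alpha * diff_sum / (img + eps))

    def wld_patch_features(img, patch=16, bins=8):
        """Concatenated per-patch histograms of differential excitation;
        vectors like this are what the SRC stage would receive."""
        xi = wld_differential_excitation(img)
        feats = []
        for r in range(0, xi.shape[0] - patch + 1, patch):
            for c in range(0, xi.shape[1] - patch + 1, patch):
                h, _ = np.histogram(xi[r:r + patch, c:c + patch],
                                    bins=bins, range=(-np.pi / 2, np.pi / 2))
                feats.append(h / max(h.sum(), 1))
        return np.concatenate(feats)

    print(wld_patch_features(np.random.rand(64, 64)).shape)   # (128,)
    ```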

  3. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.
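
    The first stage, obtaining a low-dimensional tensor subspace, can be illustrated with a generic HOSVD-style reduction: one projection matrix per mode, taken from the singular vectors of the stacked mode unfoldings. This sketch is not the TRPDA criterion itself (which additionally preserves intra-class rank order via discriminative locality alignment); the ranks and data below are placeholders.

    ```python
    import numpy as np

    def mode_unfold(T, mode):
        """Unfold tensor T along the given mode into a matrix."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def tensor_subspace(samples, ranks):
        """HOSVD-style projections: for each mode, the top singular
        vectors of the horizontally stacked mode unfoldings."""
        projections = []
        for mode, r in enumerate(ranks):
            U = np.hstack([mode_unfold(X, mode) for X in samples])
            left, _, _ = np.linalg.svd(U, full_matrices=False)
            projections.append(left[:, :r])            # (dim_mode, r)
        return projections

    def project(X, projections):
        """Multiply X by each mode's projection matrix in turn."""
        for mode, P in enumerate(projections):
            X = np.moveaxis(np.tensordot(P.T, np.moveaxis(X, mode, 0), axes=1), 0, mode)
        return X

    # Demo: 20 random 32x32 "images" reduced to 5x5 tensors.
    samples = [np.random.rand(32, 32) for _ in range(20)]
    projs = tensor_subspace(samples, ranks=(5, 5))
    print(project(samples[0], projs).shape)            # (5, 5)
    ```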

  4. Neurocognition and symptoms identify links between facial recognition and emotion processing in schizophrenia: meta-analytic findings.

    PubMed

    Ventura, Joseph; Wood, Rachel C; Jimenez, Amy M; Hellemann, Gerhard S

    2013-12-01

    In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? A meta-analysis of 102 studies (combined n=4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r=.51). In addition, the relationship between FR and EP through voice prosody (r=.58) is as strong as the relationship between FR and EP based on facial stimuli (r=.53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality - facial stimuli and voice prosody. The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. © 2013 Elsevier B.V. All rights reserved.

  5. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

    Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  6. Static facial expression recognition with convolution neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei

    2018-03-01

    Facial expression recognition is currently an active research topic in the fields of computer vision, pattern recognition and artificial intelligence. In this paper, we develop a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset formed by its train, validation and test sets, and fine-tune it on the Extended Cohn-Kanade database. In order to reduce overfitting, we utilize several techniques including dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model achieves excellent classification performance and robustness for facial expression recognition.
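
    A minimal PyTorch network in this spirit, assuming FER2013's 48x48 grayscale inputs and seven output classes, might look as follows; the abstract does not publish the exact architecture, so the layer widths and dropout rates here are illustrative.

    ```python
    import torch
    import torch.nn as nn

    class FERCNN(nn.Module):
        """Small CNN for 48x48 grayscale expression images (7 classes),
        with batch normalization and dropout to curb overfitting."""
        def __init__(self, n_classes=7):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
                nn.MaxPool2d(2),                       # 48 -> 24
                nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
                nn.MaxPool2d(2),                       # 24 -> 12
                nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
                nn.MaxPool2d(2),                       # 12 -> 6
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Dropout(0.5),
                nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
                nn.Dropout(0.5),
                nn.Linear(256, n_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = FERCNN()
    logits = model(torch.randn(8, 1, 48, 48))   # batch of 8 dummy images
    print(logits.shape)                         # torch.Size([8, 7])
    ```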

  7. Changes in brain activation during working memory and facial recognition tasks in patients with bipolar disorder with Lamotrigine monotherapy.

    PubMed

    Haldane, Morgan; Jogia, Jigar; Cobb, Annabel; Kozuch, Eliza; Kumari, Veena; Frangou, Sophia

    2008-01-01

    Verbal working memory and emotional self-regulation are impaired in Bipolar Disorder (BD). Our aim was to investigate the effect of Lamotrigine (LTG), which is effective in the clinical management of BD, on the neural circuits subserving working memory and emotional processing. Functional Magnetic Resonance Imaging data from 12 stable BD patients were used to detect LTG-induced changes as the differences in brain activity between drug-free and post-LTG monotherapy conditions during a verbal working memory task (N-back sequential letter task) and an angry facial affect recognition task. For both tasks, LTG monotherapy compared to baseline was associated with increased activation mostly within the prefrontal cortex and cingulate gyrus, in regions normally engaged in verbal working memory and emotional processing. Therefore, LTG monotherapy in BD patients may enhance cortical function within neural circuits involved in memory and emotional self-regulation.

  8. Facial expression recognition based on improved deep belief networks

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. The method uses LBP to extract texture features and then uses the improved deep belief networks as the detector and classifier of those features, thereby realizing the combination of LBP and improved DBNs for facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate improved significantly.
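
    The LBP stage can be sketched with scikit-image; the improved-DBN detector/classifier is omitted here (any classifier could stand in for a demo), and the neighborhood and histogram parameters are assumptions.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(gray_image, P=8, R=1):
        """Uniform LBP histogram of a grayscale face image; per-region
        histograms like this are what the DBN stage would consume."""
        lbp = local_binary_pattern(gray_image, P, R, method='uniform')
        n_bins = P + 2                 # P+1 uniform codes plus one "other" bin
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins))
        return hist / hist.sum()

    demo = (np.random.rand(64, 64) * 255).astype(np.uint8)
    print(lbp_histogram(demo).shape)   # (10,)
    ```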

  9. Assessing the Utility of a Virtual Environment for Enhancing Facial Affect Recognition in Adolescents with Autism

    ERIC Educational Resources Information Center

    Bekele, Esubalew; Crittendon, Julie; Zheng, Zhi; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan

    2014-01-01

    Teenagers with autism spectrum disorder (ASD) and age-matched controls participated in a dynamic facial affect recognition task within a virtual reality (VR) environment. Participants identified the emotion of a facial expression displayed at varied levels of intensity by a computer generated avatar. The system assessed performance (i.e.,…

  10. Laptop Computer-Based Facial Recognition System Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. A. Cain; G. B. Singleton

    2001-03-01

    The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting two series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey, we selected Visionics' FaceIt® software package for evaluation, along with a review of the Facial Recognition Vendor Test 2000 (FRVT 2000). This test was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were currently available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the ''best'' facial recognition software package for all uses; it was the most appropriate package for the specific applications and requirements of this project. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching for facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discreetly. For this application, an operational facial recognition system would consist of one central computer hosting the master image database with multiple standalone systems configured with duplicates of the master

  11. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    NASA Astrophysics Data System (ADS)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    This paper proposes an automatic facial emotion recognition algorithm which comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. In the training stage, the system classifies all training expressions of the six emotions considered into six classes, one for each emotion. In the recognition phase, it locates the fiducial points in a face image, applies the Gabor bank at those points, and feeds the resulting features to the trained neural architecture to recognize the emotion.
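
    A sketch of the Gabor half of the feature space: filter the image with a small bank of frequencies and orientations and sample the response magnitudes at the fiducial points. The bank parameters and landmark coordinates below are placeholders, and the 14 FAPs that the paper concatenates are omitted.

    ```python
    import numpy as np
    from skimage.filters import gabor

    def gabor_features_at_points(image, points,
                                 frequencies=(0.1, 0.2, 0.3),
                                 n_orientations=4):
        """Filter the image with a small Gabor bank and sample the
        response magnitudes at the given fiducial (row, col) points."""
        feats = []
        for f in frequencies:
            for k in range(n_orientations):
                real, imag = gabor(image, frequency=f,
                                   theta=k * np.pi / n_orientations)
                magnitude = np.hypot(real, imag)
                feats.extend(magnitude[r, c] for r, c in points)
        return np.array(feats)

    # Demo with 5 hypothetical fiducial points on a random image.
    img = np.random.rand(64, 64)
    pts = [(20, 20), (20, 44), (32, 32), (45, 24), (45, 40)]
    print(gabor_features_at_points(img, pts).shape)    # (60,)
    ```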

  12. The review and results of different methods for facial recognition

    NASA Astrophysics Data System (ADS)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can operate without the cooperation of the person under detection. Hence, facial recognition can be applied in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method with more accurate localization on specific databases; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and with large head poses; (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  13. The Moving Window Technique: A Window into Developmental Changes in Attention during Facial Emotion Recognition

    ERIC Educational Resources Information Center

    Birmingham, Elina; Meixner, Tamara; Iarocci, Grace; Kanan, Christopher; Smilek, Daniel; Tanaka, James W.

    2013-01-01

    The strategies children employ to selectively attend to different parts of the face may reflect important developmental changes in facial emotion recognition. Using the Moving Window Technique (MWT), children aged 5-12 years and adults ("N" = 129) explored faces with a mouse-controlled window in an emotion recognition task. An…

  14. Enhanced facial texture illumination normalization for face recognition.

    PubMed

    Luo, Yong; Guan, Ye-Peng

    2015-08-01

    An uncontrolled lighting condition is one of the most critical challenges for practical face recognition applications. An enhanced facial texture illumination normalization method is put forward to resolve this challenge. An adaptive relighting algorithm is developed to improve the brightness uniformity of face images. Facial texture is extracted by using an illumination estimation difference algorithm. An anisotropic histogram-stretching algorithm is proposed to minimize the intraclass distance of facial skin and maximize the dynamic range of facial texture distribution. Compared with the existing methods, the proposed method can more effectively eliminate the redundant information of facial skin and illumination. Extensive experiments show that the proposed method has superior performance in normalizing illumination variation and enhancing facial texture features for illumination-insensitive face recognition.
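
    The two generic ingredients, separating facial texture from slowly varying illumination and stretching the texture's dynamic range, can be sketched as follows. The paper's adaptive relighting and anisotropic histogram stretching are more elaborate; this shows only the underlying idea, with the Gaussian scale and percentiles as assumed parameters.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def extract_texture(img, sigma=8):
        """Illumination-estimation difference: treat the heavily blurred
        image as the slowly varying illumination and keep the residual
        as the facial texture layer."""
        img = np.asarray(img, dtype=float)
        return img - gaussian_filter(img, sigma)

    def stretch_histogram(img, low_pct=2, high_pct=98):
        """Percentile-based histogram stretching: map the central intensity
        range of the texture layer onto [0, 1] to widen its dynamic range."""
        lo, hi = np.percentile(img, [low_pct, high_pct])
        return np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

    normalized = stretch_histogram(extract_texture(np.random.rand(64, 64)))
    ```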

  15. Image ratio features for facial expression recognition application.

    PubMed

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and that the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
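
    The intuition behind ratio features fits in one function: under a Lambertian model, image intensity factors as albedo times shading, so dividing an expressive frame by a reference frame of the same face cancels the albedo, leaving a quantity driven by expression-induced shading change rather than by skin color or lighting. The paper's exact feature definition may differ; this is a minimal sketch.

    ```python
    import numpy as np

    def image_ratio(expressive, reference, eps=1e-6):
        """Pixelwise ratio feature: with I = albedo * shading, the albedo
        cancels, so the ratio is insensitive to albedo variation."""
        expressive = np.asarray(expressive, dtype=float)
        reference = np.asarray(reference, dtype=float)
        return expressive / (reference + eps)
    ```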

  16. Effects of exposure to facial expression variation in face learning and recognition.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  17. Influence of make-up on facial recognition.

    PubMed

    Ueda, Sayako; Koyama, Takamasa

    2010-01-01

    Make-up may enhance or disguise facial characteristics. The influence of wearing make-up on facial recognition could be of two kinds: (i) when women do not wear make-up and then are seen with make-up, and (ii) when women wear make-up and then are seen without make-up. A study is reported which shows that light make-up makes it easier to recognise a face, and heavy make-up makes it more difficult. Seeing initially a made-up face makes any subsequent facial recognition more difficult than initially seeing that face without make-up.

  18. Slowing down presentation of facial movements and vocal sounds enhances facial expression recognition and induces facial-vocal imitation in children with autism.

    PubMed

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-09-01

    This study examined the effects of slowing down the presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-ROM, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a static control. Overall, children with autism showed lower performance in expression recognition and more induced facial-vocal imitation than controls. In the autistic group, facial expression recognition and induced facial-vocal imitation were significantly enhanced in slow conditions. The findings may offer new perspectives for understanding and intervention regarding verbal and emotional perceptive and communicative impairments in autistic populations.

  19. The Reliability of Facial Recognition of Deceased Persons on Photographs.

    PubMed

    Caplova, Zuzana; Obertova, Zuzana; Gibelli, Daniele M; Mazzarelli, Debora; Fracasso, Tony; Vanezis, Peter; Sforza, Chiarella; Cattaneo, Cristina

    2017-09-01

    In humanitarian emergencies, such as the ongoing deaths of migrants in the Mediterranean, antemortem documentation needed for identification may be limited. The use of visual identification has previously been reported in cases of mass disaster such as the Thai tsunami. This pilot study explores the ability of observers to match unfamiliar faces of living and dead persons and whether facial morphology can be used for identification. A questionnaire was given to 41 students and five professionals in the field of forensic identification, with the task of choosing whether a facial photograph corresponded to one of the five photographs in a lineup and of identifying the most useful features used for recognition. Although the overall recognition score did not differ significantly between professionals and students, the median scores of 78.1% and 80.0%, respectively, were too low to consider this a reliable identification method; it thus needs to be supported by other means. © 2017 American Academy of Forensic Sciences.

  1. Development of emotional facial recognition in late childhood and adolescence.

    PubMed

    Thomas, Laura A; De Bellis, Michael D; Graham, Reiko; LaBar, Kevin S

    2007-09-01

    The ability to interpret emotions in facial expressions is crucial for social functioning across the lifespan. Facial expression recognition develops rapidly during infancy and improves with age during the preschool years. However, the developmental trajectory from late childhood to adulthood is less clear. We tested older children, adolescents and adults on a two-alternative forced-choice discrimination task using morphed faces that varied in emotional content. Actors appeared to pose expressions that changed incrementally along three progressions: neutral-to-fear, neutral-to-anger, and fear-to-anger. Across all three morph types, adults displayed more sensitivity to subtle changes in emotional expression than children and adolescents. Fear morphs and fear-to-anger blends showed a linear developmental trajectory, whereas anger morphs showed a quadratic trend, increasing sharply from adolescents to adults. The results provide evidence for late developmental changes in emotional expression recognition with some specificity in the time course for distinct emotions.

  2. Violent video game players and non-players differ on facial emotion recognition.

    PubMed

    Diaz, Ruth L; Wong, Ulric; Hodgins, David C; Chiu, Carina G; Goghari, Vina M

    2016-01-01

    Violent video game playing has been associated with both positive and negative effects on cognition. We examined whether playing two or more hours of violent video games a day, compared to not playing video games, was associated with a different pattern of recognition of five facial emotions, while controlling for general perceptual and cognitive differences that might also occur. Undergraduate students were categorized as violent video game players (n = 83) or non-gamers (n = 69) and completed a facial recognition task, consisting of an emotion recognition condition and a control condition of gender recognition. Additionally, participants completed questionnaires assessing their video game and media consumption, aggression, and mood. Violent video game players recognized fearful faces both more accurately and quickly and disgusted faces less accurately than non-gamers. Desensitization to violence, constant exposure to fear and anxiety during game playing, and the habituation to unpleasant stimuli, are possible mechanisms that could explain these results. Future research should evaluate the effects of violent video game playing on emotion processing and social cognition more broadly. © 2015 Wiley Periodicals, Inc.

  3. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors.

    PubMed

    Ardizzi, Martina; Evangelista, Valentina; Ferroni, Francesca; Umiltà, Maria A; Ravera, Roberto; Gallese, Vittorio

    2017-01-01

    One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, normally allowing the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus unbalancing victims' recognition of negative emotions. Despite the plethora of studies on this topic, to date, it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30, 15 males) performed a forced-choice chimeric facial expressions recognition task. The chimeric facial expressions were obtained pairing upper and lower half faces of two different negative emotions (selected from anger, fear and sadness for a total of six different combinations). Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger. In this case, differently from controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims' performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify the perceptual

  4. Impact of Social Cognition on Alcohol Dependence Treatment Outcome: Poorer Facial Emotion Recognition Predicts Relapse/Dropout.

    PubMed

    Rupp, Claudia I; Derntl, Birgit; Osthaus, Friederike; Kemmler, Georg; Fleischhacker, W Wolfgang

    2017-12-01

    Despite growing evidence for neurobehavioral deficits in social cognition in alcohol use disorder (AUD), the clinical relevance remains unclear, and little is known about its impact on treatment outcome. This study prospectively investigated the impact of neurocognitive social abilities at treatment onset on treatment completion. Fifty-nine alcohol-dependent patients were assessed with measures of social cognition including 3 core components of empathy via paradigms measuring: (i) emotion recognition (the ability to recognize emotions via facial expression), (ii) emotional perspective taking, and (iii) affective responsiveness at the beginning of inpatient treatment for alcohol dependence. Subjective measures were also obtained, including estimates of task performance and a self-report measure of empathic abilities (Interpersonal Reactivity Index). According to treatment outcomes, patients were divided into a patient group with a regular treatment course (e.g., with planned discharge and without relapse during treatment) or an irregular treatment course (e.g., relapse and/or premature and unplanned termination of treatment, "dropout"). Compared with patients completing treatment in a regular fashion, patients with relapse and/or dropout of treatment had significantly poorer facial emotion recognition ability at treatment onset. Additional logistic regression analyses confirmed these results and identified poor emotion recognition performance as a significant predictor for relapse/dropout. Self-report (subjective) measures did not correspond with neurobehavioral social cognition measures, respectively objective task performance. Analyses of individual subtypes of facial emotions revealed poorer recognition particularly of disgust, anger, and no (neutral faces) emotion in patients with relapse/dropout. Social cognition in AUD is clinically relevant. Less successful treatment outcome was associated with poorer facial emotion recognition ability at the beginning of

  5. Comparison of emotion recognition from facial expression and music.

    PubMed

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro expressions), and the recognition itself does not have to be a conscious process. We assumed that recognition from facial expressions would be favored over recognition of emotions communicated through music. In order to compare the success rate in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey which included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized when presented on human faces than in music, possibly because understanding facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for because of the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive abilities like attention, memory and motivation. Music pieces are probably processed differently in the brain than facial expressions and, consequently, differently evaluated as relevant emotional clues.

  6. Assessment of perception of morphed facial expressions using the Emotion Recognition Task: normative data from healthy participants aged 8-75.

    PubMed

    Kessels, Roy P C; Montagne, Barbara; Hendriks, Angelique W; Perrett, David I; de Haan, Edward H F

    2014-03-01

    The ability to recognize and label emotional facial expressions is an important aspect of social cognition. However, existing paradigms to examine this ability present only static facial expressions, suffer from ceiling effects or have limited or no norms. A computerized test, the Emotion Recognition Task (ERT), was developed to overcome these difficulties. In this study, we examined the effects of age, sex, and intellectual ability on emotion perception using the ERT. In this test, emotional facial expressions are presented as morphs gradually expressing one of the six basic emotions from neutral to four levels of intensity (40%, 60%, 80%, and 100%). The task was administered in 373 healthy participants aged 8-75. In children aged 8-17, only small developmental effects were found for the emotions anger and happiness, in contrast to adults who showed age-related decline on anger, fear, happiness, and sadness. Sex differences were present predominantly in the adult participants. IQ only minimally affected the perception of disgust in the children, while years of education were correlated with all emotions but surprise and disgust in the adult participants. A regression-based approach was adopted to present age- and education- or IQ-adjusted normative data for use in clinical practice. Previous studies using the ERT have demonstrated selective impairments on specific emotions in a variety of psychiatric, neurologic, or neurodegenerative patient groups, making the ERT a valuable addition to existing paradigms for the assessment of emotion perception. © 2013 The British Psychological Society.
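
    For illustration, intensity-graded stimuli of the ERT kind are sometimes approximated by a linear blend between a neutral and a fully expressive image of the same face; real morphing software also warps the facial geometry, so the cross-fade below is only a crude stand-in.

    ```python
    import numpy as np

    def blend(neutral, expressive, intensity):
        """Pixelwise cross-fade: intensity 0.0 = neutral, 1.0 = full expression."""
        return (1.0 - intensity) * neutral + intensity * expressive

    neutral, fear = np.random.rand(48, 48), np.random.rand(48, 48)
    stimuli = [blend(neutral, fear, i) for i in (0.4, 0.6, 0.8, 1.0)]
    ```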

  7. The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.

    PubMed

    Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard

    2016-10-01

    The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrate for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and that this integration improves facial emotion recognition. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Cognitive penetrability and emotion recognition in human facial expressions

    PubMed Central

    Marchi, Francesco

    2015-01-01

    Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion. PMID:26150796

  9. Facial recognition of happiness among older adults with active and remitted major depression.

    PubMed

    Shiroma, Paulo R; Thuras, Paul; Johns, Brian; Lim, Kelvin O

    2016-09-30

    Biased emotion processing in depression might be a trait characteristic independent of mood improvement and a vulnerability factor for further depressive episodes. This phenomenon has not been adequately examined among older adults with depression. In a 2-year cross-sectional study, 59 older patients with either active or remitted major depression, or never depressed, completed a facial emotion recognition task (FERT) to probe perceptual bias for happiness. The results showed that depressed patients, compared with never-depressed subjects, had significantly lower sensitivity in identifying happiness, particularly at moderate intensity of facial stimuli. Patients in remission from a previous major depressive episode but with no or minimal symptoms had a similar sensitivity rate in identifying happy facial expressions compared to patients with an active depressive episode. Further studies are necessary to confirm whether recognition of happy expressions reflects a persistent perceptual bias of major depression in older adults. Published by Elsevier Ireland Ltd.

  10. Recognition of facial and musical emotions in Parkinson's disease.

    PubMed

    Saenz, A; Doé de Maindreville, A; Henry, A; de Labbey, S; Bakchine, S; Ehrlé, N

    2013-03-01

    Patients with amygdala lesions were found to be impaired in recognizing the fear emotion both from face and from music. In patients with Parkinson's disease (PD), impairment in recognition of emotions from facial expressions was reported for disgust, fear, sadness and anger, but no studies had yet investigated this population for the recognition of emotions from both face and music. The ability to recognize basic universal emotions (fear, happiness and sadness) from both face and music was investigated in 24 medicated patients with PD and 24 healthy controls. The patient group was tested for language (verbal fluency tasks), memory (digit and spatial span), executive functions (Similarities and Picture Completion subtests of the WAIS III, Brixton and Stroop tests), and visual attention (Bells test), and completed self-assessment tests for anxiety and depression. Results showed that the PD group was significantly impaired in recognition of both fear and sadness from facial expressions, whereas their performance in recognition of emotions from musical excerpts did not differ from that of the control group. The scores for fear and sadness recognition from faces were correlated neither with scores in tests of executive and cognitive functions, nor with scores on the self-assessment scales. We attribute the observed dissociation to the modality (visual vs. auditory) of presentation and to the ecological value of the musical stimuli that we used. We discuss the relevance of our findings for the care of patients with PD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.

  11. Plastic surgery and the biometric e-passport: implications for facial recognition.

    PubMed

    Ologunde, Rele

    2015-04-01

    This correspondence comments on the challenges that plastic, reconstructive and aesthetic surgery poses to the facial recognition algorithms employed by biometric passports. The limitations of facial recognition technology in patients who have undergone facial plastic surgery are also discussed. Finally, the advice of the UK HM Passport Office to people who undergo facial surgery is reported.

  12. The relationship between facial emotion recognition and executive functions in first-episode patients with schizophrenia and their siblings.

    PubMed

    Yang, Chengqing; Zhang, Tianhong; Li, Zezhi; Heeramun-Aubeeluck, Anisha; Liu, Na; Huang, Nan; Zhang, Jie; He, Leiying; Li, Hui; Tang, Yingying; Chen, Fazhan; Liu, Fei; Wang, Jijun; Lu, Zheng

    2015-10-08

    Although many studies have examined executive functions and facial emotion recognition in people with schizophrenia, few of them focused on the correlation between the two. Furthermore, their relationship in the siblings of patients also remains unclear. The aim of the present study is to examine the correlation between executive functions and facial emotion recognition in patients with first-episode schizophrenia and their siblings. Thirty patients with first-episode schizophrenia, their twenty-six siblings, and thirty healthy controls were enrolled. They completed facial emotion recognition tasks using the Ekman Standard Faces Database, and executive functioning was measured by the Wisconsin Card Sorting Test (WCST). Hierarchical regression analysis was applied to assess the correlation between executive functions and facial emotion recognition. Our study found that in siblings, the accuracy in recognizing low-degree 'disgust' emotion was negatively correlated with the total correct rate in the WCST (r = -0.614, p = 0.023), but was positively correlated with the total errors in the WCST (r = 0.623, p = 0.020); the accuracy in recognizing 'neutral' emotion was positively correlated with the total error rate in the WCST (r = 0.683, p = 0.014) while negatively correlated with the total correct rate in the WCST (r = -0.677, p = 0.017). People with schizophrenia showed an impairment in facial emotion recognition when identifying moderate 'happy' facial emotion, the accuracy of which was significantly correlated with the number of completed categories of the WCST (R² = 0.432, p < .05). There were no correlations between executive functions and facial emotion recognition in the healthy control group. Our study demonstrated that facial emotion recognition impairment correlated with executive function impairment in people with schizophrenia and their unaffected siblings, but not in healthy controls.

  13. Facial Affect Recognition and Social Anxiety in Preschool Children

    ERIC Educational Resources Information Center

    Ale, Chelsea M.; Chorney, Daniel B.; Brice, Chad S.; Morris, Tracy L.

    2010-01-01

    Research relating anxiety and facial affect recognition has focused mostly on school-aged children and adults and has yielded mixed results. The current study sought to demonstrate an association among behavioural inhibition and parent-reported social anxiety, shyness, social withdrawal and facial affect recognition performance in 30 children,…

  14. The own-age face recognition bias is task dependent.

    PubMed

    Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J

    2015-08-01

    The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity. © 2014 The British Psychological Society.

  16. Detecting facial emotion recognition deficits in schizophrenia using dynamic stimuli of varying intensities.

    PubMed

    Hargreaves, A; Mothersill, O; Anderson, M; Lawless, S; Corvin, A; Donohoe, G

    2016-10-28

    Deficits in facial emotion recognition have been associated with functional impairments in patients with Schizophrenia (SZ). Whilst a strong ecological argument has been made for the use of both dynamic facial expressions and varied emotion intensities in research, SZ emotion recognition studies to date have primarily used static stimuli at a single (100%) emotion intensity. To address this issue, the present study investigated the accuracy of emotion recognition among patients with SZ and healthy subjects using dynamic facial emotion stimuli of varying intensities. To this end, an emotion recognition task (ERT) designed by Montagne (2007) was adapted and employed. 47 patients with a DSM-IV diagnosis of SZ and 51 healthy participants were assessed for emotion recognition. Results of the ERT were tested for correlation with performance in areas of cognitive ability typically found to be impaired in psychosis, including IQ, memory, attention and social cognition. Patients performed less well than healthy participants at recognising each of the 6 emotions analysed. Surprisingly, however, the groups did not differ in terms of the impact of emotion intensity on recognition accuracy; for both groups higher intensity levels predicted greater accuracy, but no significant interaction between diagnosis and emotional intensity was found for any of the 6 emotions. Accuracy of emotion recognition was, however, more strongly correlated with cognition in the patient cohort. Whilst this study demonstrates the feasibility of using ecologically valid dynamic stimuli in the study of emotion recognition accuracy, varying the intensity of the emotion displayed was not shown to affect patients and healthy participants differentially, and thus may not be a necessary variable to include in emotion recognition research. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Affective theory of mind inferences contextually influence the recognition of emotional facial expressions.

    PubMed

    Stewart, Suzanne L K; Schepman, Astrid; Haigh, Matthew; McHugh, Rhian; Stewart, Andrew J

    2018-03-14

    The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently-valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically and contextually influence the perceptual processing of emotional facial expressions in a separate task, even when those emotions differ in valence. Thus, our novel findings suggest that language acts as a contextual influence on the recognition of emotional facial expressions for both same and different valences.

  18. A study on facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Xu, Jingjing

    2017-09-01

    In communication, postures and facial expressions of feelings such as happiness, anger and sadness play important roles in conveying information. With the development of the technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization and pose estimation have recently been put forward. However, many challenges and problems remain to be addressed. In this paper, several of these techniques are summarized and analyzed, all of which relate to facial expression recognition under pose variation: a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning of the input domain for classification, and robust-statistics face frontalization.

  19. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous approaches can be classified as either shape- or appearance-based. The shape-based approach has the disadvantage that individual variance of facial feature points exists irrespective of similar expressions, which can reduce recognition accuracy. The appearance-based approach has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in three ways compared to previous work. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed by using the ratios between the facial feature points based on the facial action coding system. Second, an SVM, which is trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognition. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By determining the expression of the input facial image as the one whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.
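    The fusion step, training an SVM on the pair of shape- and appearance-based matching scores, can be illustrated as below. The scores and labels here are synthetic placeholders; in the paper the two scores come from feature-point ratios and facial texture matching:

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n = 200
    # Column 0: shape-based matching score; column 1: appearance-based
    # matching score, one row per compared image pair (synthetic).
    scores = rng.normal(size=(n, 2))
    # 1 = same expression class, 0 = different class (synthetic labels).
    labels = (scores.sum(axis=1) + rng.normal(0, 0.5, n) > 0).astype(int)

    fusion = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    fusion.fit(scores, labels)
    print(fusion.predict([[0.8, -0.2]]))  # fuse a new pair of scores
    ```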

  20. Effects of Minority Status on Facial Recognition and Naming Performance.

    ERIC Educational Resources Information Center

    Roberts, Richard J.; Hamsher, Kerry

    1984-01-01

    Examined the differential effects of minority status in Blacks (N=94) on a facial recognition test and a naming test. Results showed that performance on the facial recognition test was relatively free of racial bias, but this was not the case for visual naming. (LLL)

  1. Facial Recognition of Happiness Is Impaired in Musicians with High Music Performance Anxiety.

    PubMed

    Sabino, Alini Daniéli Viana; Camargo, Cristielli M; Chagas, Marcos Hortes N; Osório, Flávia L

    2018-01-01

    Music performance anxiety (MPA) can be defined as a lasting and intense apprehension connected with musical performance in public. Studies suggest that MPA can be regarded as a subtype of social anxiety. Since individuals with social anxiety have deficits in the recognition of facial emotion, we hypothesized that musicians with high levels of MPA would share similar impairments. The aim of this study was to compare parameters of facial emotion recognition (FER) between musicians with high and low MPA. 150 amateur and professional musicians with different musical backgrounds were assessed with respect to their level of MPA and completed a dynamic FER task. The outcomes investigated were accuracy, response time, emotional intensity, and response bias. Musicians with high MPA were less accurate in the recognition of happiness (p = 0.04; d = 0.34), had an increased response bias toward fear (p = 0.03), and showed increased response time to facial emotions as a whole (p = 0.02; d = 0.39). Musicians with high MPA displayed FER deficits that were independent of general anxiety levels and possibly of general cognitive capacity. These deficits may favor the maintenance and exacerbation of experiences of anxiety during public performance, since cues of approval, satisfaction, and encouragement are not adequately recognized.

  2. A voxel-based lesion study on facial emotion recognition after penetrating brain injury

    PubMed Central

    Dal Monte, Olga; Solomon, Jeffrey M.; Schintu, Selene; Knutson, Kristine M.; Strenziok, Maren; Pardini, Matteo; Leopold, Anne; Raymont, Vanessa; Grafman, Jordan

    2013-01-01

    The ability to read emotions in the face of another person is an important social skill that can be impaired in subjects with traumatic brain injury (TBI). To determine the brain regions that modulate facial emotion recognition, we conducted a whole-brain analysis using a well-validated facial emotion recognition task and voxel-based lesion symptom mapping (VLSM) in a large sample of patients with focal penetrating TBIs (pTBIs). Our results revealed that individuals with pTBI performed significantly worse than normal controls in recognizing unpleasant emotions. VLSM results showed that impairment in facial emotion recognition was due to damage in a bilateral fronto-temporo-limbic network, including medial prefrontal cortex (PFC), anterior cingulate cortex, left insula and temporal areas. Besides those common areas, damage to bilateral and anterior regions of PFC led to impairment in recognizing unpleasant emotions, whereas damage to bilateral posterior PFC and left temporal areas led to impairment in recognizing pleasant emotions. Our findings add empirical evidence that the ability to read pleasant and unpleasant emotions in other people's faces is a complex process involving not only a common network that includes bilateral fronto-temporo-limbic lobes, but also other regions depending on emotional valence. PMID:22496440
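    At its core, VLSM is a mass-univariate comparison: at each voxel, the task scores of patients whose lesions cover that voxel are compared with the scores of patients whose lesions spare it. A minimal sketch with synthetic data (real analyses operate on registered lesion volumes and correct for multiple comparisons):

    ```python
    import numpy as np
    from scipy import stats

    def vlsm_t_map(lesion_masks, scores, min_n=5):
        """lesion_masks: (n_patients, n_voxels) binary array; scores:
        (n_patients,) behavioural scores (e.g. emotion recognition
        accuracy). Returns per-voxel t and p maps (NaN where either
        group is smaller than min_n)."""
        n_vox = lesion_masks.shape[1]
        t_map = np.full(n_vox, np.nan)
        p_map = np.full(n_vox, np.nan)
        for v in range(n_vox):
            hit = scores[lesion_masks[:, v] == 1]
            spared = scores[lesion_masks[:, v] == 0]
            if len(hit) >= min_n and len(spared) >= min_n:
                t_map[v], p_map[v] = stats.ttest_ind(hit, spared,
                                                     equal_var=False)
        return t_map, p_map

    # Synthetic stand-in: 60 patients, 1000 voxels, ~20% lesion coverage.
    rng = np.random.default_rng(2)
    masks = (rng.random((60, 1000)) < 0.2).astype(int)
    accuracy = rng.normal(70, 10, 60)
    t_map, p_map = vlsm_t_map(masks, accuracy)
    print(np.nanmin(p_map))
    ```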

  3. Visual Scan Paths and Recognition of Facial Identity in Autism Spectrum Disorder and Typical Development

    PubMed Central

    Wilson, C. Ellie; Palermo, Romina; Brock, Jon

    2012-01-01

    Background Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. Methodology and Principal Findings Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the ‘Dynamic Scanning Index’ – which was incremented each time the participant saccaded into and out of one of the core-feature interest areas – was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. Conclusions and Significance In support of the second hypothesis, results suggested that increased saccading between core-features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined. PMID:22666378
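    The 'Dynamic Scanning Index' can be read as a count of transitions across core-feature interest-area boundaries. The abstract does not give the exact scoring rules, so the following sketch is one plausible reading (treating eyes, nose, and mouth as core features and counting a move between two core features once):

    ```python
    def dynamic_scanning_index(fixation_regions):
        """Count saccades into and out of core-feature interest areas.

        fixation_regions: region label per fixation in temporal order,
        e.g. 'eyes', 'nose', 'mouth' (core features) or 'other'.
        """
        core = {"eyes", "nose", "mouth"}
        index = 0
        for prev, cur in zip(fixation_regions, fixation_regions[1:]):
            crossed_boundary = (prev in core) != (cur in core)
            moved_between_cores = prev in core and cur in core and prev != cur
            if crossed_boundary or moved_between_cores:
                index += 1
        return index

    # eyes -> other -> eyes -> nose -> nose -> mouth gives 4 transitions
    print(dynamic_scanning_index(
        ["eyes", "other", "eyes", "nose", "nose", "mouth"]))
    ```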

  4. Non-Cooperative Facial Recognition Video Dataset Collection Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Marcia L.; Erikson, Rebecca L.; Lombardo, Nicholas J.

    The Pacific Northwest National Laboratory (PNNL) will produce a non-cooperative (i.e. not posing for the camera) facial recognition video data set for research purposes to evaluate and enhance facial recognition systems technology. The aggregate data set consists of 1) videos capturing PNNL role players and public volunteers in three key operational settings, 2) photographs of the role players for enrolling in an evaluation database, and 3) ground truth data that documents when the role player is within various camera fields of view. PNNL will deliver the aggregate data set to DHS, who may then choose to make it available to other government agencies interested in evaluating and enhancing facial recognition systems. The three operational settings that will be the focus of the video collection effort include: 1) unidirectional crowd flow, 2) bi-directional crowd flow, and 3) linear and/or serpentine queues.

  5. The recognition of facial emotion expressions in Parkinson's disease.

    PubMed

    Assogna, Francesca; Pontieri, Francesco E; Caltagirone, Carlo; Spalletta, Gianfranco

    2008-11-01

    A limited number of studies in Parkinson's Disease (PD) suggest a disturbance of recognition of facial emotion expressions. In particular, disgust recognition impairment has been reported in unmedicated and medicated PD patients. However, the results are rather inconclusive in the definition of the degree and the selectivity of emotion recognition impairment, and an associated impairment of almost all basic facial emotions in PD is also described. Few studies have investigated the relationship with neuropsychiatric and neuropsychological symptoms with mainly negative results. This inconsistency may be due to many different problems, such as emotion assessment, perception deficit, cognitive impairment, behavioral symptoms, illness severity and antiparkinsonian therapy. Here we review the clinical characteristics and neural structures involved in the recognition of specific facial emotion expressions, and the plausible role of dopamine transmission and dopamine replacement therapy in these processes. It is clear that future studies should be directed to clarify all these issues.

  6. Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.

    PubMed

    Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi

    2015-03-19

    In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of the patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity and noisy expressions encountered in practice, a critical problem seldom addressed in existing work. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
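    The pipeline sketched below mirrors the ingredients named here: per-patch LBP histograms scaled by patch weights (standing in for Fisher-criterion weights learned on training data), and sparse-representation classification with an L1 coder. The grid size, weights, and Lasso penalty are illustrative assumptions:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.linear_model import Lasso

    def weighted_lbp_feature(face, grid=(4, 4), weights=None, P=8, R=1.0):
        """Concatenate per-patch uniform-LBP histograms, each scaled by a
        patch weight (e.g. a Fisher separation score)."""
        h, w = face.shape
        ph, pw = h // grid[0], w // grid[1]
        if weights is None:
            weights = np.ones(grid)  # placeholder: uniform weights
        feats = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                patch = face[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
                lbp = local_binary_pattern(patch, P, R, method="uniform")
                hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2),
                                       density=True)
                feats.append(weights[i, j] * hist)
        return np.concatenate(feats)

    def sparse_classify(train_feats, train_labels, test_feat, alpha=0.01):
        """Code the test feature over the training dictionary, then assign
        the class whose coefficients reconstruct it with least residual."""
        coder = Lasso(alpha=alpha, max_iter=5000)
        coder.fit(train_feats.T, test_feat)  # columns = training samples
        x = coder.coef_
        residuals = {
            c: np.linalg.norm(
                test_feat - train_feats.T @ np.where(train_labels == c, x, 0.0))
            for c in np.unique(train_labels)
        }
        return min(residuals, key=residuals.get)

    # Tiny usage example with random stand-in faces (4 classes x 5 images).
    rng = np.random.default_rng(6)
    train = rng.random((20, 64, 64))
    labels = np.repeat(np.arange(4), 5)
    X = np.array([weighted_lbp_feature(f) for f in train])
    print(sparse_classify(X, labels, weighted_lbp_feature(rng.random((64, 64)))))
    ```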

  7. Facial emotion recognition ability: psychiatry nurses versus nurses from other departments.

    PubMed

    Gultekin, Gozde; Kincir, Zeliha; Kurt, Merve; Catal, Yasir; Acil, Asli; Aydin, Aybike; Özcan, Mualla; Delikkaya, Busra N; Kacar, Selma; Emul, Murat

    2016-12-01

    Facial emotion recognition is a basic element of non-verbal communication. Although some researchers have shown that recognizing facial expressions may be important in doctor-patient interactions, there are no studies concerning facial emotion recognition in nurses. Here, we aimed to investigate facial emotion recognition ability in nurses and to compare this ability between nurses from psychiatry and other departments. In this cross-sectional study, sixty-seven nurses were divided into two groups according to their departments: psychiatry (n=31) and other departments (n=36). A Facial Emotion Recognition Test, constructed from a set of photographs from Ekman and Friesen's book "Pictures of Facial Affect", was administered to all participants. In the whole group, the most accurately recognized facial emotion was happiness (99.14%), while the least accurately recognized was fear (47.71%). There were no significant differences between the two groups in mean accuracy rates for recognizing happy, sad, fearful, angry, and surprised facial expressions (for all, p>0.05). The ability to recognize disgusted and neutral facial emotions tended to be better in nurses from other departments than in psychiatry nurses (p=0.052 and p=0.053, respectively). This study was the first to reveal no difference in facial emotion recognition ability between psychiatry nurses and non-psychiatry nurses. In medical education curricula throughout the world, no specific training program is scheduled for recognizing emotional cues of patients. We consider that improving the ability of medical staff to recognize facial emotion expressions might be beneficial in reducing inappropriate patient-staff interactions.

  8. Recognition of facial emotions in neuropsychiatric disorders.

    PubMed

    Kohler, Christian G; Turner, Travis H; Gur, Raquel E; Gur, Ruben C

    2004-04-01

    Recognition of facial emotions represents an important aspect of interpersonal communication and is governed by select neural substrates. We present data on emotion recognition in healthy young adults utilizing a novel set of color photographs of evoked universal emotions. In addition, we review the recent literature on emotion recognition in psychiatric and neurologic disorders, and studies that compare different disorders.

  9. Associations between facial emotion recognition and young adolescents’ behaviors in bullying

    PubMed Central

    Gini, Gianluca; Altoè, Gianmarco

    2017-01-01

    This study investigated whether different behaviors young adolescents can act during bullying episodes were associated with their ability to recognize morphed facial expressions of the six basic emotions, expressed at high and low intensity. The sample included 117 middle-school students (45.3% girls; mean age = 12.4 years) who filled in a peer nomination questionnaire and individually performed a computerized emotion recognition task. Bayesian generalized mixed-effects models showed a complex picture, in which type and intensity of emotions, students’ behavior and gender interacted in explaining recognition accuracy. Results were discussed with a particular focus on negative emotions and suggesting a “neutral” nature of emotion recognition ability, which does not necessarily lead to moral behavior but can also be used for pursuing immoral goals. PMID:29131871

  10. Use of Facial Recognition Software to Identify Disaster Victims With Facial Injuries.

    PubMed

    Broach, John; Yong, Rothsovann; Manuell, Mary-Elise; Nichols, Constance

    2017-10-01

    After large-scale disasters, victim identification frequently presents a challenge and a priority for responders attempting to reunite families and ensure proper identification of deceased persons. The purpose of this investigation was to determine whether currently commercially available facial recognition software can successfully identify disaster victims with facial injuries. Photos of 106 people were taken before and after application of moulage designed to simulate traumatic facial injuries. These photos, as well as photos from volunteers' personal photo collections, were analyzed by using facial recognition software to determine whether this technology could accurately identify a person with facial injuries. The study results suggest that a responder could expect a correct match between submitted photos and photos of injured patients between 39% and 45% of the time, and a much higher rate if the submitted photos were of optimal quality, with correct matches exceeding 90% in most situations. The present results suggest that the use of this software would provide significant benefit to responders. Although a correct result was returned only about 40% of the time, this would still likely represent a benefit for a responder trying to identify hundreds or thousands of victims. (Disaster Med Public Health Preparedness. 2017;11:568-572).

  11. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    NASA Astrophysics Data System (ADS)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions have an important role in interpersonal communication and the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features and yielding results comparable to studies using whole-face information, only slightly lower (by ~2.5%) than the best whole-face system while using only ~1/3 of the facial region.
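    Sequential forward selection greedily adds the single feature that most improves cross-validated accuracy until the target count is reached. A sketch using scikit-learn's built-in selector as a stand-in for the authors' implementation, on synthetic geometric features (dimensions and class count are placeholders):

    ```python
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    # Stand-in geometric features (e.g. distances/angles among eye and
    # eyebrow keypoints) for 300 face images and 5 expression classes.
    X = rng.normal(size=(300, 24))
    y = rng.integers(0, 5, 300)

    svm = SVC(kernel="linear")
    sfs = SequentialFeatureSelector(svm, n_features_to_select=8,
                                    direction="forward", cv=5)
    sfs.fit(X, y)
    selected = sfs.get_support()
    print("selected feature indices:", np.flatnonzero(selected))
    svm.fit(X[:, selected], y)  # final classifier on the reduced set
    ```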

  12. Facial emotion recognition, socio-occupational functioning and expressed emotions in schizophrenia versus bipolar disorder.

    PubMed

    Thonse, Umesh; Behere, Rishikesh V; Praharaj, Samir Kumar; Sharma, Podila Sathya Venkata Narasimha

    2018-06-01

    Facial emotion recognition deficits have been consistently demonstrated in patients with severe mental disorders. Expressed emotion is found to be an important predictor of relapse. However, the relationship between facial emotion recognition abilities and expressed emotions, and its influence on socio-occupational functioning, in schizophrenia versus bipolar disorder has not been studied. In this study we examined 91 patients with schizophrenia and 71 with bipolar disorder for psychopathology, socio-occupational functioning and emotion recognition abilities. Primary caregivers of 62 patients with schizophrenia and 49 with bipolar disorder were assessed on the Family Attitude Questionnaire to assess their expressed emotions. Patients with schizophrenia and bipolar disorder performed similarly on the emotion recognition task. Patients in the schizophrenia group experienced more critical comments and had poorer socio-occupational functioning compared to patients with bipolar disorder. Poorer socio-occupational functioning in patients with schizophrenia was significantly associated with greater dissatisfaction in their caregivers. In patients with bipolar disorder, poorer emotion recognition scores significantly correlated with poorer adaptive living skills and greater hostility and dissatisfaction in their caregivers. The findings of our study suggest that emotion recognition abilities in patients with bipolar disorder are associated with negative expressed emotions, leading to problems in adaptive living skills. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    ERIC Educational Resources Information Center

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  14. Mapping correspondence between facial mimicry and emotion recognition in healthy subjects.

    PubMed

    Ponari, Marta; Conson, Massimiliano; D'Amico, Nunzia Pina; Grossi, Dario; Trojano, Luigi

    2012-12-01

    We aimed at verifying the hypothesis that facial mimicry is causally and selectively involved in emotion recognition. For this purpose, in Experiment 1, we explored the effect of tonic contraction of muscles in the upper or lower half of participants' faces on their ability to recognize emotional facial expressions. We found that the "lower" manipulation specifically impaired recognition of happiness and disgust, and the "upper" manipulation impaired recognition of anger, while both manipulations affected recognition of fear; recognition of surprise and sadness was not affected by either blocking manipulation. In Experiment 2, we verified whether emotion recognition is hampered by stimuli in which an upper or lower half-face showing an emotional expression is combined with a neutral half-face. We found that the neutral lower half-face interfered with recognition of happiness and disgust, whereas the neutral upper half impaired recognition of anger; recognition of fear and sadness was impaired by both manipulations, whereas recognition of surprise was not affected by either manipulation. Taken together, the present findings support simulation models of emotion recognition and provide insight into the role of mimicry in comprehension of others' emotional facial expressions. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  15. Reduced Recognition of Dynamic Facial Emotional Expressions and Emotion-Specific Response Bias in Children with an Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Evers, Kris; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2015-01-01

    Emotion labelling was evaluated in two matched samples of 6- to 14-year-old children with and without an autism spectrum disorder (ASD; N = 45 and N = 50, resp.), using six dynamic facial expressions. The Emotion Recognition Task proved to be valuable in demonstrating subtle emotion recognition difficulties in ASD, as we showed a general poorer emotion…

  16. Emotional facial recognition in proactive and reactive violent offenders.

    PubMed

    Philipp-Wiegmann, Florence; Rösler, Michael; Retz-Junginger, Petra; Retz, Wolfgang

    2017-10-01

    The purpose of this study is to analyse individual differences in the ability of emotional facial recognition in violent offenders, who were characterised as either reactive or proactive in relation to their offending. In accordance with the findings of our previous study, we expected greater impairment in facial recognition in reactive than in proactive violent offenders. To assess the ability to recognize facial expressions, the computer-based Facial Emotional Expression Labeling Test (FEEL) was performed. Group allocation of reactive and proactive violent offenders and assessment of psychopathic traits were performed by an independent forensic expert using rating scales (PROREA, PCL-SV). Compared to proactive violent offenders and controls, the performance of emotion recognition in the reactive offender group was significantly lower, both in total and especially in the recognition of negative emotions such as anxiety (d = -1.29), sadness (d = -1.54), and disgust (d = -1.11). Furthermore, reactive violent offenders showed a tendency to interpret non-anger emotions as anger. In contrast, proactive violent offenders performed as well as controls. The general and specific deficits in reactive violent offenders are in line with the results of our previous study and correspond to predictions of the Integrated Emotion System (IES, 7) and hostile attribution processes (21). Due to the different error patterns in the FEEL test, the theoretical distinction between proactive and reactive aggression can be supported based on emotion recognition, even though aggression itself is always a heterogeneous act rather than a distinct one-dimensional concept.

  17. Eye-movement assessment of the time course in facial expression recognition: Neurophysiological implications.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2009-12-01

    Happy, surprised, disgusted, angry, sad, fearful, and neutral faces were presented extrafoveally, with fixations on faces allowed or not. The faces were preceded by a cue word that designated the face to be saccaded in a two-alternative forced-choice discrimination task (2AFC; Experiments 1 and 2), or were followed by a probe word for recognition (Experiment 3). Eye tracking was used to decompose the recognition process into stages. Relative to the other expressions, happy faces (1) were identified faster (as early as 160 msec from stimulus onset) in extrafoveal vision, as revealed by shorter saccade latencies in the 2AFC task; (2) required less encoding effort, as indexed by shorter first fixations and dwell times; and (3) required less decision-making effort, as indicated by fewer refixations on the face after the recognition probe was presented. This reveals a happy-face identification advantage both prior to and during overt attentional processing. The results are discussed in relation to prior neurophysiological findings on latencies in facial expression recognition.

  18. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a method based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method extracts features with the improved LTP and then uses an improved deep belief network as the detector and classifier of those features. The combination of improved LTP features and the improved deep network is thus realized for facial expression recognition. The recognition rate on the CK+ database improved significantly.
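    A basic local ternary pattern (without the paper's specific improvements) thresholds each neighbour against the centre pixel with a tolerance t and splits the resulting ternary code into two binary codes, following Tan and Triggs' original formulation; the 3x3 neighbourhood below is the simplest variant:

    ```python
    import numpy as np

    def ltp_codes(image, t=5.0):
        """Return the 'upper' (+1) and 'lower' (-1) 8-bit LTP codes for
        every interior pixel of a grayscale image."""
        img = np.asarray(image, dtype=float)
        h, w = img.shape
        center = img[1:-1, 1:-1]
        # 8 neighbours in clockwise order starting at the top-left.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        up_bits, low_bits = [], []
        for bit, (dy, dx) in enumerate(offsets):
            nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            up_bits.append((nb > center + t).astype(int) << bit)
            low_bits.append((nb < center - t).astype(int) << bit)
        return np.sum(up_bits, axis=0), np.sum(low_bits, axis=0)

    upper, lower = ltp_codes(np.random.rand(64, 64) * 255, t=5.0)
    print(upper.shape, upper.max(), lower.max())  # codes in 0..255
    ```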

  19. Pose-variant facial expression recognition using an embedded image system

    NASA Astrophysics Data System (ADS)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction has been automated facial expression recognition. By recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified into happiness, neutral, sadness, surprise or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
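    Turning the 14 tracked points into an expression feature vector can be as simple as taking all pairwise distances and feeding them to an SVM. The sketch below uses random stand-in coordinates; the inter-ocular normalization (and the assumption that points 0 and 1 are the eye centres) is ours, not stated in the abstract:

    ```python
    import numpy as np
    from itertools import combinations
    from sklearn.svm import SVC

    def distance_features(points):
        """Pairwise Euclidean distances among facial feature points
        (14 points -> C(14,2) = 91 features), scale-normalized by the
        distance between the first two points (assumed eye centres)."""
        dists = np.array([np.linalg.norm(points[i] - points[j])
                          for i, j in combinations(range(len(points)), 2)])
        return dists / np.linalg.norm(points[0] - points[1])

    # Stand-in AAM-tracked points for 500 frames, 5 expression classes
    # (happiness, neutral, sadness, surprise, anger).
    rng = np.random.default_rng(4)
    frames = rng.normal(size=(500, 14, 2))
    X = np.array([distance_features(f) for f in frames])
    y = rng.integers(0, 5, 500)
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict(X[:3]))
    ```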

  20. Facial recognition using simulated prosthetic pixelized vision.

    PubMed

    Thompson, Robert W; Barnett, G David; Humayun, Mark S; Dagnelie, Gislin

    2003-11-01

    To evaluate a model of simulated pixelized prosthetic vision using noncontiguous circular phosphenes, to test the effects of phosphene and grid parameters on facial recognition. A video headset was used to view a reference set of four faces, followed by a partially averted image of one of those faces viewed through a square pixelizing grid that contained 10x10 to 32x32 dots separated by gaps. The grid size, dot size, gap width, dot dropout rate, and gray-scale resolution were varied separately about a standard test condition, for a total of 16 conditions. All tests were first performed at 99% contrast and then repeated at 12.5% contrast. Discrimination speed and performance were influenced by all stimulus parameters. The subjects achieved highly significant facial recognition accuracy for all high-contrast tests except for grids with 70% random dot dropout and two gray levels. In low-contrast tests, significant facial recognition accuracy was achieved for all but the most adverse grid parameters: total grid area less than 17% of the target image, 70% dropout, four or fewer gray levels, and a gap of 40.5 arcmin. For difficult test conditions, a pronounced learning effect was noticed during high-contrast trials, and a more subtle practice effect on timing was evident during subsequent low-contrast trials. These findings suggest that reliable face recognition with crude pixelized grids can be learned and may be possible, even with a crude visual prosthesis.
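    The simulation can be approximated by block-averaging the image into a phosphene grid, quantizing to a few gray levels, and dropping a random subset of dots. Dot size and gap geometry, which the study also varied, are omitted from this sketch:

    ```python
    import numpy as np

    def pixelized_view(image, grid=16, gray_levels=8, dropout=0.0, seed=0):
        """Render an image (values in [0, 1]) as a grid x grid array of
        phosphene intensities with quantized gray levels and dropout."""
        rng = np.random.default_rng(seed)
        h, w = image.shape
        ch, cw = h // grid, w // grid
        cells = image[:ch * grid, :cw * grid].reshape(grid, ch, grid, cw)
        dots = cells.mean(axis=(1, 3))  # mean luminance per grid cell
        q = np.round(dots * (gray_levels - 1)) / (gray_levels - 1)
        q[rng.random((grid, grid)) < dropout] = 0.0  # dropped phosphenes
        return q

    face = np.random.rand(128, 128)  # stand-in for a normalized face photo
    # An adverse condition from the study: 70% dropout, few gray levels.
    print(pixelized_view(face, grid=16, gray_levels=4, dropout=0.7))
    ```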

  1. Facial and prosodic emotion recognition in social anxiety disorder.

    PubMed

    Tseng, Huai-Hsuan; Huang, Yu-Lien; Chen, Jian-Ting; Liang, Kuei-Yu; Lin, Chao-Cheng; Chen, Sue-Huei

    2017-07-01

    Patients with social anxiety disorder (SAD) have a cognitive preference to negatively evaluate emotional information. In particular, the preferential biases in prosodic emotion recognition in SAD have been much less explored. The present study aims to investigate whether SAD patients retain negative evaluation biases across visual and auditory modalities when given sufficient response time to recognise emotions. Thirty-one SAD patients and 31 age- and gender-matched healthy participants completed a culturally suitable non-verbal emotion recognition task and received clinical assessments for social anxiety and depressive symptoms. A repeated measures analysis of variance was conducted to examine group differences in emotion recognition. Compared to healthy participants, SAD patients were significantly less accurate at recognising facial and prosodic emotions, and spent more time on emotion recognition. The differences were mainly driven by the lower accuracy and longer reaction times for recognising fearful emotions in SAD patients. Within the SAD patients, lower accuracy of sad face recognition was associated with higher severity of depressive and social anxiety symptoms, particularly with avoidance symptoms. These findings may represent a cross-modality pattern of avoidance in the later stage of identifying negative emotions in SAD. This pattern may be linked to clinical symptom severity.

  2. Intelligent Facial Recognition Systems: Technology advancements for security applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, C.L.

    1993-07-01

    Insider problems such as theft and sabotage can occur within the security and surveillance realm of operations when unauthorized people obtain access to sensitive areas. A possible solution to these problems is a means to identify individuals (not just credentials or badges) in a given sensitive area and provide full time personnel accountability. One approach desirable at Department of Energy facilities for access control and/or personnel identification is an Intelligent Facial Recognition System (IFRS) that is non-invasive to personnel. Automatic facial recognition does not require the active participation of the enrolled subjects, unlike most other biological measurement (biometric) systems (e.g., fingerprint, hand geometry, or eye retinal scan systems). It is this feature that makes an IFRS attractive for applications other than access control such as emergency evacuation verification, screening, and personnel tracking. This paper discusses current technology that shows promising results for DOE and other security applications. A survey of research and development in facial recognition identified several companies and universities that were interested and/or involved in the area. A few advanced prototype systems were also identified. Sandia National Laboratories is currently evaluating facial recognition systems that are in the advanced prototype stage. The initial application for the evaluation is access control in a controlled environment with a constant background and with cooperative subjects. Further evaluations will be conducted in a less controlled environment, which may include a cluttered background and subjects that are not looking towards the camera. The outcome of the evaluations will help identify areas of facial recognition systems that need further development and will help to determine the effectiveness of the current systems for security applications.

  3. Accuracy of computer-assisted navigation: significant augmentation by facial recognition software.

    PubMed

    Glicksman, Jordan T; Reger, Christine; Parasher, Arjun K; Kennedy, David W

    2017-09-01

    Over the past 20 years, image guidance navigation has been used with increasing frequency as an adjunct during sinus and skull base surgery. These devices commonly utilize surface registration, where varying pressure of the registration probe and loss of contact with the face during the skin tracing process can lead to registration inaccuracies, and the number of registration points incorporated is necessarily limited. The aim of this study was to evaluate the use of novel facial recognition software for image guidance registration. Consecutive adults undergoing endoscopic sinus surgery (ESS) were prospectively studied. Patients underwent image guidance registration via both conventional surface registration and facial recognition software. The accuracy of both registration processes were measured at the head of the middle turbinate (MTH), middle turbinate axilla (MTA), anterior wall of sphenoid sinus (SS), and nasal tip (NT). Forty-five patients were included in this investigation. Facial recognition was accurate to within a mean of 0.47 mm at the MTH, 0.33 mm at the MTA, 0.39 mm at the SS, and 0.36 mm at the NT. Facial recognition was more accurate than surface registration at the MTH by an average of 0.43 mm (p = 0.002), at the MTA by an average of 0.44 mm (p < 0.001), and at the SS by an average of 0.40 mm (p < 0.001). The integration of facial recognition software did not adversely affect registration time. In this prospective study, automated facial recognition software significantly improved the accuracy of image guidance registration when compared to conventional surface registration. © 2017 ARS-AAOA, LLC.

  4. Impaired recognition of facial emotions from low-spatial frequencies in Asperger syndrome.

    PubMed

    Kätsyri, Jari; Saalasti, Satu; Tiippana, Kaisa; von Wendt, Lennart; Sams, Mikko

    2008-01-01

    The theory of 'weak central coherence' [Happe, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5-25] implies that persons with autism spectrum disorders (ASDs) have a perceptual bias for local but not for global stimulus features. The recognition of emotional facial expressions representing various different levels of detail has not been studied previously in ASDs. We analyzed the recognition of four basic emotional facial expressions (anger, disgust, fear and happiness) from low-spatial frequencies (overall global shapes without local features) in adults with an ASD. A group of 20 participants with Asperger syndrome (AS) was compared to a group of non-autistic age- and sex-matched controls. Emotion recognition was tested from static and dynamic facial expressions whose spatial frequency contents had been manipulated by low-pass filtering at two levels. The two groups recognized emotions similarly from non-filtered faces and from dynamic vs. static facial expressions. In contrast, the participants with AS were less accurate than controls in recognizing facial emotions from very low-spatial frequencies. The results suggest intact recognition of basic facial emotions and dynamic facial information, but impaired visual processing of global features in ASDs.
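    The low-pass manipulation can be sketched as a Gaussian blur at increasing strengths; the sigma values below are placeholders, since the study defined its two filtering levels in terms of spatial frequency content rather than pixel units:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def low_spatial_frequency_versions(face, sigmas=(4.0, 8.0)):
        """Remove progressively more high-spatial-frequency detail from a
        face image, leaving only increasingly global shape information."""
        return [gaussian_filter(face, sigma=s) for s in sigmas]

    face = np.random.rand(256, 256)  # stand-in for an expression photo
    mild, strong = low_spatial_frequency_versions(face)
    print(mild.shape, strong.shape)
    ```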

  5. Relative preservation of the recognition of positive facial expression "happiness" in Alzheimer disease.

    PubMed

    Maki, Yohko; Yoshida, Hiroshi; Yamaguchi, Tomoharu; Yamaguchi, Haruyasu

    2013-01-01

    Positivity recognition bias has been reported for facial expressions as well as memory and visual stimuli in aged individuals, whereas emotional facial recognition in Alzheimer disease (AD) patients is controversial, with possible involvement of confounding factors such as deficits in spatial processing of non-emotional facial features and in verbal processing to express emotions. Thus, we examined whether recognition of positive facial expressions was preserved in AD patients, by adapting a new method that eliminated the influences of these confounding factors. Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate factors related to verbal processing, the participants were required to match the stimulus and answer images, avoiding the use of verbal labels. In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than that to the other five expressions. In AD patients, recognition of happiness was relatively preserved: it was the most sensitively recognized expression and resisted the influences of age and disease.

  6. Stability of facial emotion recognition performance in bipolar disorder.

    PubMed

    Martino, Diego J; Samamé, Cecilia; Strejilevich, Sergio A

    2016-09-30

    The aim of this study was to assess the performance in emotional processing over time in a sample of euthymic patients with bipolar disorder (BD). Performance in the facial recognition of the six basic emotions (surprise, anger, sadness, happiness, disgust, and fear) did not change during a follow-up period of almost 7 years. These preliminary results suggest that performance in facial emotion recognition might be stable over time in BD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Executive cognitive functioning and the recognition of facial expressions of emotion in incarcerated violent offenders, non-violent offenders, and controls.

    PubMed

    Hoaken, Peter N S; Allaby, David B; Earle, Jeff

    2007-01-01

    Violence is a social problem that carries enormous costs; however, our understanding of its etiology is quite limited. A large body of research exists, which suggests a relationship between abnormalities of the frontal lobe and aggression; as a result, many researchers have implicated deficits in so-called "executive function" as an antecedent to aggressive behaviour. Another possibility is that violence may be related to problems interpreting facial expressions of emotion, a deficit associated with many forms of psychopathology, and an ability linked to the prefrontal cortex. The current study investigated performance on measures of executive function and on a facial-affect recognition task in 20 violent offenders, 20 non-violent offenders, and 20 controls. In support of our hypotheses, both offender groups performed significantly more poorly on measures of executive function relative to controls. In addition, violent offenders were significantly poorer on the facial-affect recognition task than either of the other two groups. Interestingly, scores on these measures were significantly correlated, with executive deficits associated with difficulties accurately interpreting facial affect. The implications of these results are discussed in terms of a broader understanding of violent behaviour. Copyright 2007 Wiley-Liss, Inc.

  8. Facial Recognition in a Group-Living Cichlid Fish.

    PubMed

    Kohda, Masanori; Jordan, Lyndon Alexander; Hotta, Takashi; Kosaka, Naoya; Karino, Kenji; Tanaka, Hirokazu; Taniyama, Masami; Takeyama, Tomohiro

    2015-01-01

    The theoretical underpinnings of the mechanisms of sociality, e.g. territoriality, hierarchy, and reciprocity, are based on assumptions of individual recognition. While behavioural evidence suggests individual recognition is widespread, the cues that animals use to recognise individuals are established in only a handful of systems. Here, we use digital models to demonstrate that facial features are the visual cue used for individual recognition in the social fish Neolamprologus pulcher. Focal fish were exposed to digital images showing four different combinations of familiar and unfamiliar face and body colorations. Focal fish attended to digital models with unfamiliar faces longer and from a greater distance than to models with familiar faces. These results strongly suggest that fish can distinguish individuals accurately using facial colour patterns. Our observations also suggest that fish are able to rapidly (≤ 0.5 sec) discriminate between familiar and unfamiliar individuals, a speed of recognition comparable to that of primates, including humans.

  9. Sex differences in facial emotion recognition across varying expression intensity levels from videos.

    PubMed

    Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark

    2018-01-01

    There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or 'extreme' examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1sec) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations.

  11. Facial recognition in primary focal dystonia.

    PubMed

    Rinnerthaler, Martina; Benecke, Cord; Bartha, Lisa; Entner, Tanja; Poewe, Werner; Mueller, Joerg

    2006-01-01

    The basal ganglia seem to be involved in emotional processing. Primary dystonia is a movement disorder considered to result from basal ganglia dysfunction, and the aim of the present study was to investigate emotion recognition in patients with primary focal dystonia. Thirty-two patients with primary cranial (n=12) and cervical (n=20) dystonia were compared to 32 healthy controls matched for age, sex, and educational level on the facially expressed emotion labeling (FEEL) test, a computer-based tool measuring a person's ability to recognize facially expressed emotions. Patients with cognitive impairment or depression were excluded. None of the patients received medication with a possible cognitive side effect profile and only those with mild to moderate dystonia were included. Patients with primary dystonia showed isolated deficits in the recognition of disgust (P=0.007), while no differences between patients and controls were found with regard to the other emotions (fear, happiness, surprise, sadness, and anger). The findings of the present study add further evidence to the conception that dystonia is not only a motor but a complex basal ganglia disorder including selective emotion recognition disturbances. Copyright (c) 2005 Movement Disorder Society.

  12. Shy children are less sensitive to some cues to facial recognition.

    PubMed

    Brunet, Paul M; Mondloch, Catherine J; Schmidt, Louis A

    2010-02-01

    Temperamental shyness in children is characterized by avoidance of faces and eye contact, beginning in infancy. We conducted two studies to determine whether temperamental shyness was associated with deficits in sensitivity to some cues to facial identity. In Study 1, 40 typically developing 10-year-old children made same/different judgments about pairs of faces that differed in the appearance of individual features, the shape of the external contour, or the spacing among features; their parent completed the Colorado childhood temperament inventory (CCTI). Children who scored higher on CCTI shyness made more errors than their non-shy counterparts only when discriminating faces based on the spacing of features. Differences in accuracy were not related to other scales of the CCTI. In Study 2, we showed that these differences were face-specific and cannot be attributed to differences in task difficulty. Findings suggest that shy children are less sensitive to some cues to facial recognition, which may underlie their difficulty distinguishing certain facial emotions in others and lead to a cascade of secondary negative effects in social behaviour.

  13. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland and border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel, robust autonomous facial recognition system based on a so-called logarithmical image visualization technique inspired by the human visual system. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with local binary patterns to perform discriminative feature extraction for facial recognition. The Yale database, the Yale-B database and the ATT database are used for computer simulation accuracy and efficiency testing. Extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
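    The general idea, compressing the illumination range with a logarithmic mapping (echoing the human visual system's roughly logarithmic luminance response) before extracting local binary pattern features, can be sketched as follows; the paper's exact visualization transform is not given in the abstract, so a plain log mapping stands in:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def log_visualized_lbp(face, P=8, R=1.0):
        """Log-map intensities to [0, 1], then return a uniform-LBP
        histogram as the face descriptor."""
        img = np.asarray(face, dtype=float)
        log_img = np.log1p(img)
        log_img = (log_img - log_img.min()) / (np.ptp(log_img) + 1e-9)
        lbp = local_binary_pattern(log_img, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2),
                               density=True)
        return hist

    face = np.random.rand(112, 92) * 255  # stand-in sized like ATT images
    print(log_visualized_lbp(face).round(3))
    ```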

  14. Recognition of computerized facial approximations by familiar assessors.

    PubMed

    Richard, Adam H; Monson, Keith L

    2017-11-01

    Studies testing the effectiveness of facial approximations typically involve groups of participants who are unfamiliar with the approximated individual(s). This limitation requires the use of photograph arrays including a picture of the subject for comparison to the facial approximation. While this practice is often necessary due to the difficulty in obtaining a group of assessors who are familiar with the approximated subject, it may not accurately simulate the thought process of the target audience (friends and family members) in comparing a mental image of the approximated subject to the facial approximation. As part of a larger process to evaluate the effectiveness and best implementation of the ReFace facial approximation software program, the rare opportunity arose to conduct a recognition study using assessors who were personally acquainted with the subjects of the approximations. ReFace facial approximations were generated based on preexisting medical scans, and co-workers of the scan donors were tested on whether they could accurately pick out the approximation of their colleague from arrays of facial approximations. Results from the study demonstrated overall poor recognition performance (i.e., when a single choice within a pool was not enforced) by individuals who were familiar with the approximated subjects. Out of 220 recognition tests, only 10.5% resulted in the assessor selecting the correct approximation (or correctly choosing not to make a selection when the array consisted only of foils), an outcome that was not significantly different from the 9% random chance rate. When allowed to select multiple approximations the assessors felt resembled the target individual, the overall sensitivity for ReFace approximations was 16.0% and the overall specificity was 81.8%. These results differ markedly from the results of a previous study using assessors who were unfamiliar with the approximated subjects. Some possible explanations for this disparity in

  15. Facial Recognition in a Discus Fish (Cichlidae): Experimental Approach Using Digital Models.

    PubMed

    Satoh, Shun; Tanaka, Hirokazu; Kohda, Masanori

    2016-01-01

    A number of mammals and birds are known to be capable of visually discriminating between familiar and unfamiliar individuals, depending on facial patterns in some species. Many fish also visually recognize other conspecifics individually, and previous studies report that facial color patterns can be an initial signal for individual recognition. For example, a cichlid fish and a damselfish use individual-specific color patterns that develop only in the facial area. However, it remains to be determined whether the facial area is an especially favorable site for visual signals in fish, and if so why. The monogamous discus fish, Symphysodon aequifasciatus (Cichlidae), is capable of visually distinguishing its pair-partner from other conspecifics. Discus fish have individual-specific coloration patterns on the entire body, including the facial area, frontal head, trunk and vertical fins. If the facial area is an inherently important site for visual cues, this species will use facial patterns for individual recognition; otherwise it will use patterns on other body parts as well. We used modified digital models to examine whether discus fish use only facial coloration for individual recognition. Digital models of four different combinations of familiar and unfamiliar fish faces and bodies were displayed in frontal and lateral views. Focal fish frequently performed partner-specific displays towards partner-face models, and aggressive displays towards models with non-partners' faces. We conclude that, to identify individuals, this fish depends not on frontal color patterns but on lateral facial color patterns, even though it has unique color patterns on other parts of the body. We discuss the significance of facial coloration for individual recognition in fish compared with birds and mammals.

  16. Appearance-Based Facial Recognition Using Visible and Thermal Imagery: A Comparative Study

    DTIC Science & Technology

    2006-01-01

    Andrea Selinger and Diego A. Socolinsky (Equinox). Appearance-Based Facial Recognition Using Visible and Thermal Imagery: A Comparative Study.

  17. Genetic variations in the dopamine system and facial expression recognition in healthy chinese college students.

    PubMed

    Zhu, Bi; Chen, Chuansheng; Moyzis, Robert K; Dong, Qi; Chen, Chunhui; He, Qinghua; Stern, Hal S; Li, He; Li, Jin; Li, Jun; Lessard, Jared; Lin, Chongde

    2012-01-01

    This study investigated the relation between genetic variations in the dopamine system and facial expression recognition. A sample of Chinese college students (n = 478) was given a facial expression recognition task. Subjects were genotyped for 98 loci [96 single-nucleotide polymorphisms (SNPs) and 2 variable number tandem repeats] in 16 genes involved in the dopamine neurotransmitter system, covering its 4 subsystems: synthesis (TH, DDC, and DBH), degradation/transport (COMT, MAOA, MAOB, and SLC6A3), receptors (DRD1, DRD2, DRD3, DRD4, and DRD5), and modulation (NTS, NTSR1, NTSR2, and NLN). To quantify the total contributions of the dopamine system to emotion recognition, we used a series of multiple regression models. Permutation analyses were performed to assess the probability of obtaining such results by chance. Among the 78 loci that were included in the final analyses (after excluding 12 SNPs that were in high linkage disequilibrium and 8 that were not in Hardy-Weinberg equilibrium), 1 (for fear), 3 (for sadness), 5 (for anger), 13 (for surprise), and 15 (for disgust) loci exhibited main effects on the recognition of facial expressions. Genetic variations in the dopamine system accounted for 3% of the variance for fear, 6% for sadness, 7% for anger, 10% for surprise, and 18% for disgust, with the latter surviving a stringent permutation test. Genetic variations in the dopamine system (especially the dopamine synthesis and modulation subsystems) made significant contributions to individual differences in the recognition of disgust faces. Copyright © 2012 S. Karger AG, Basel.
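
    The permutation analysis sketched above can be made concrete: refit the regression after shuffling the phenotype to build a null distribution for the variance explained (R^2). This is an illustrative reconstruction with synthetic genotypes and scores, not the authors' pipeline:

        # Permutation test for the variance in a recognition score explained
        # by a set of genotype predictors (synthetic data throughout).
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X = rng.integers(0, 3, size=(478, 15)).astype(float)  # allele counts at 15 loci
        y = 0.2 * X[:, 0] + rng.normal(size=478)              # synthetic recognition score

        r2_obs = LinearRegression().fit(X, y).score(X, y)     # observed R^2

        n_perm = 2000
        r2_null = np.empty(n_perm)
        for i in range(n_perm):
            y_perm = rng.permutation(y)                       # break genotype-phenotype link
            r2_null[i] = LinearRegression().fit(X, y_perm).score(X, y_perm)

        p_value = (np.sum(r2_null >= r2_obs) + 1) / (n_perm + 1)
        print(f"observed R^2 = {r2_obs:.3f}, permutation p = {p_value:.4f}")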

  18. Violent Media Consumption and the Recognition of Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.

    2006-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions were morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph.…

  19. The Differential Effects of Thalamus and Basal Ganglia on Facial Emotion Recognition

    ERIC Educational Resources Information Center

    Cheung, Crystal C. Y.; Lee, Tatia M. C.; Yip, James T. H.; King, Kristin E.; Li, Leonard S. W.

    2006-01-01

    This study examined if subcortical stroke was associated with impaired facial emotion recognition. Furthermore, the lateralization of the impairment and the differential profiles of facial emotion recognition deficits with localized thalamic or basal ganglia damage were also studied. Thirty-eight patients with subcortical strokes and 19 matched…

  20. The Chinese Facial Emotion Recognition Database (CFERD): a computer-generated 3-D paradigm to measure the recognition of facial emotional expressions at different intensities.

    PubMed

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2012-12-30

    The Chinese Facial Emotion Recognition Database (CFERD), a computer-generated three-dimensional (3D) paradigm, was developed to measure the recognition of facial emotional expressions at different intensities. The stimuli consisted of 3D colour photographic images of six basic facial emotional expressions (happiness, sadness, disgust, fear, anger and surprise) and neutral Chinese faces. The purpose of the present study is to describe the development and validation of the CFERD with nonclinical healthy participants (N=100; 50 men; age ranging between 18 and 50 years), and to generate a normative data set. The results showed that the sensitivity index d' [d' = Z(hit rate) - Z(false alarm rate), where Z(p), p ∈ [0,1], is the inverse of the cumulative standard normal distribution] …
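
    As a worked example of the index defined above, d' can be computed with the inverse normal CDF (scipy's norm.ppf); the hit and false-alarm rates here are illustrative, not CFERD norms:

        # d' = Z(hit rate) - Z(false-alarm rate), Z = inverse cumulative normal.
        from scipy.stats import norm

        def d_prime(hit_rate: float, fa_rate: float) -> float:
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        print(f"d' = {d_prime(0.85, 0.10):.2f}")  # 85% hits vs. 10% false alarms -> ~2.32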

  1. Age, gender, and puberty influence the development of facial emotion recognition.

    PubMed

    Lawrence, Kate; Campbell, Ruth; Skuse, David

    2015-01-01

    Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children's ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6-16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modeled these cross-sectional data in terms of competence in the accurate recognition of the six emotions studied, controlling for the positive correlation between emotion recognition and IQ. Significant linear trends were seen in children's ability to recognize facial expressions of happiness, surprise, fear, and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there was little or no change in accuracy over the age range 6-16 years; near-adult levels of competence were established by middle childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers.

  2. Age, gender, and puberty influence the development of facial emotion recognition

    PubMed Central

    Lawrence, Kate; Campbell, Ruth; Skuse, David

    2015-01-01

    Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children's ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6-16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modeled these cross-sectional data in terms of competence in the accurate recognition of the six emotions studied, controlling for the positive correlation between emotion recognition and IQ. Significant linear trends were seen in children's ability to recognize facial expressions of happiness, surprise, fear, and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there was little or no change in accuracy over the age range 6-16 years; near-adult levels of competence were established by middle childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers. PMID:26136697

  3. A Neural Basis of Facial Action Recognition in Humans

    PubMed Central

    Srinivasan, Ramprakash; Golomb, Julie D.

    2016-01-01

    By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and
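
    The cross-participant decoding described above can be sketched generically: train a linear classifier on multivoxel response patterns from all but one participant and test on the held-out participant. A schematic scikit-learn version with synthetic data standing in for the fMRI patterns (not the authors' actual analysis):

        # Leave-one-subject-out decoding of action-unit presence from
        # multivoxel patterns (all data here are synthetic placeholders).
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        rng = np.random.default_rng(1)
        n_subjects, trials, n_voxels = 10, 40, 200
        X = rng.normal(size=(n_subjects * trials, n_voxels))  # one pattern per trial
        y = rng.integers(0, 2, size=X.shape[0])               # AU present / absent
        groups = np.repeat(np.arange(n_subjects), trials)     # subject labels

        scores = cross_val_score(LinearSVC(max_iter=5000), X, y,
                                 groups=groups, cv=LeaveOneGroupOut())
        print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")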

  4. The effect of forced choice on facial emotion recognition: a comparison to open verbal classification of emotion labels

    PubMed Central

    Limbrecht-Ecklundt, Kerstin; Scheck, Andreas; Jerg-Bretzke, Lucia; Walter, Steffen; Hoffmann, Holger; Traue, Harald C.

    2013-01-01

    Objective: This article examines potential methodological problems with the application of a forced choice response format in facial emotion recognition. Methodology: 33 subjects were presented with validated facial stimuli. The task was to make a decision about which emotion was shown. In addition, the subjective certainty concerning the decision was recorded. Results: The detection rates were 68% for fear, 81% for sadness, 85% for anger, 87% for surprise, 88% for disgust, and 94% for happiness, all well above chance probability. Conclusion: This study refutes the concern that the use of forced choice formats may not adequately reflect actual recognition performance. The use of standardized tests to examine emotion recognition ability leads to valid results and can be used in different contexts. For example, the images presented here appear suitable for diagnosing deficits in emotion recognition in the context of psychological disorders and for mapping treatment progress. PMID:23798981

  5. Dopamine and light: effects on facial emotion recognition.

    PubMed

    Cawley, Elizabeth; Tippler, Maria; Coupland, Nicholas J; Benkelfat, Chawki; Boivin, Diane B; Aan Het Rot, Marije; Leyton, Marco

    2017-09-01

    Bright light can affect mood states and social behaviours. Here, we tested potential interacting effects of light and dopamine on facial emotion recognition. Participants were 32 women with subsyndromal seasonal affective disorder tested in either a bright (3000 lux) or dim (10 lux) light environment. Each participant completed two test days, one following the ingestion of a phenylalanine/tyrosine-deficient mixture and one with a nutritionally balanced control mixture, both administered double blind in a randomised order. Approximately four hours post-ingestion, participants completed a self-report measure of mood followed by a facial emotion recognition task. All testing took place between November and March, when seasonal symptoms would be present. Following acute phenylalanine/tyrosine depletion (APTD), compared to the nutritionally balanced control mixture, participants in the dim light condition were more accurate at recognising sad faces, less likely to misclassify them, and faster at responding to them, effects that were independent of changes in mood. Effects of APTD on responses to sad faces in the bright light group were less consistent. There were no APTD effects on responses to other emotions, with one exception: a significant light × mixture interaction was seen for the reaction time to fear, but the pattern of effect was not predicted a priori or seen on other measures. Together, the results suggest that the processing of sad emotional stimuli might be greater when dopamine transmission is low. Bright light exposure, used for the treatment of both seasonal and non-seasonal mood disorders, might produce some of its benefits by preventing this effect.

  6. Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.

    PubMed

    Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S

    2007-01-01

    People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations: (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated the assessed muscles, whereas the chew manipulation activated them only intermittently. Further, expressing happiness generated the most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.

  7. Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

    PubMed Central

    Siddiqi, Muhammad Hameed; Lee, Sungyoung; Lee, Young-Koo; Khan, Adil Mehmood; Truc, Phan Tran Ho

    2013-01-01

    Over the last decade, human facial expression recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; the need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expression recognition (HL-FER) system to tackle these problems. Unlike previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and applies hierarchical recognition to overcome the problem of high similarity among different expressions. Unlike most previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation across datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. A weighted average recognition accuracy of 98.7% across the three datasets, using three classifiers, indicates the success of employing the HL-FER for human FER. PMID:24316568
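
    The LDA classification core with n-fold cross-validation can be sketched as below; the features and labels are synthetic stand-ins for the extracted global/local expression features, so this illustrates only the evaluation scheme, not the full HL-FER pipeline:

        # n-fold cross-validated LDA over expression feature vectors.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        rng = np.random.default_rng(2)
        X = rng.normal(size=(420, 60))    # e.g., a 60-D feature vector per image
        y = rng.integers(0, 6, size=420)  # six expression classes

        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv)
        print(f"10-fold accuracy: {acc.mean():.2%}")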

  8. The Relationships among Facial Emotion Recognition, Social Skills, and Quality of Life.

    ERIC Educational Resources Information Center

    Simon, Elliott W.; And Others

    1995-01-01

    Forty-six institutionalized adults with mild or moderate mental retardation were administered the Vineland Adaptive Behavior Scales (socialization domain), a subjective measure of quality of life, and a facial emotion recognition test. Facial emotion recognition, quality of life, and social skills appeared to be independent of one another. Facial…

  9. [Impact of facial emotional recognition alterations in Dementia of the Alzheimer type].

    PubMed

    Rubinstein, Wanda; Cossini, Florencia; Politis, Daniel

    2016-07-01

    Recognition of basic emotions from faces is independent of other deficits in dementia of the Alzheimer type, and there is disagreement about which emotions are more difficult to recognize. Our aim was to study the presence of alterations in the facial recognition of basic emotions, and to investigate whether there were differences in the recognition of each type of emotion in Alzheimer's disease. Using three tests of basic facial emotion recognition, we evaluated 29 patients who had been diagnosed with dementia of the Alzheimer type and 18 control subjects. Significant differences were obtained between the groups on the tests of basic facial emotion recognition, and between the individual emotions. Since the amygdala, one of the brain structures responsible for emotional reactions, is affected in the early stages of this disease, our findings are relevant to understanding how this alteration of emotional recognition impacts the difficulties these patients have with both interpersonal relations and behavioral disorders.

  10. Facial Recognition in a Discus Fish (Cichlidae): Experimental Approach Using Digital Models

    PubMed Central

    Satoh, Shun; Tanaka, Hirokazu; Kohda, Masanori

    2016-01-01

    A number of mammals and birds are known to be capable of visually discriminating between familiar and unfamiliar individuals, depending on facial patterns in some species. Many fish also visually recognize other conspecifics individually, and previous studies report that facial color patterns can be an initial signal for individual recognition. For example, a cichlid fish and a damselfish use individual-specific color patterns that develop only in the facial area. However, it remains to be determined whether the facial area is an especially favorable site for visual signals in fish, and if so, why. The monogamous discus fish, Symphysodon aequifasciatus (Cichlidae), is capable of visually distinguishing its pair-partner from other conspecifics. Discus fish have individual-specific coloration patterns on the entire body, including the facial area, frontal head, trunk and vertical fins. If the facial area is an inherently important site for visual cues, this species will use facial patterns for individual recognition; otherwise it will use patterns on other body parts as well. We used modified digital models to examine whether discus fish use only facial coloration for individual recognition. Digital models of four different combinations of familiar and unfamiliar fish faces and bodies were displayed in frontal and lateral views. Focal fish frequently performed partner-specific displays towards partner-face models, and aggressive displays towards models of non-partners' faces. We conclude that, to identify individuals, this fish depends not on frontal color patterns but on lateral facial color patterns, even though it has unique color patterns on other parts of the body. We discuss the significance of facial coloration for individual recognition in fish compared with birds and mammals. PMID:27191162

  11. In what sense 'familiar'? Examining experiential differences within pathologies of facial recognition.

    PubMed

    Young, Garry

    2009-09-01

    Explanations of Capgras delusion and prosopagnosia typically incorporate a dual-route approach to facial recognition, in which a deficit in overt or covert processing in one condition is mirror-reversed in the other. Despite this double dissociation, the experiences of either patient group are often reported in the same way: as lacking a sense of familiarity toward familiar faces. In this paper, deficits in the facial processing of these patients are compared to other facial recognition pathologies, and their experiential characteristics are mapped onto the dual-route model in order to provide a less ambiguous link between facial processing and experiential content. The paper concludes that the experiential states of Capgras delusion, prosopagnosia, and related facial pathologies are quite distinct, and that this descriptive distinctiveness finds explanatory equivalence at the level of anatomical and functional disruption within the face recognition system. The role of the skin conductance response (SCR) as a measure of 'familiarity' is also clarified.

  12. Impaired Facial Expression Recognition in Children with Temporal Lobe Epilepsy: Impact of Early Seizure Onset on Fear Recognition

    ERIC Educational Resources Information Center

    Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle

    2008-01-01

    The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…

  13. Facial recognition performance of female inmates as a result of sexual assault history.

    PubMed

    Islam-Zwart, Kayleen A; Heath, Nicole M; Vik, Peter W

    2005-06-01

    This study examined the effect of sexual assault history on facial recognition performance. Gender of facial stimuli and posttraumatic stress disorder (PTSD) symptoms were also expected to influence performance. Fifty-six female inmates completed an interview and the Wechsler Memory Scale-Third Edition Faces I and Faces II subtests (Wechsler, 1997). Women with a sexual assault history exhibited better immediate and delayed facial recognition skills than those with no assault history. There were no differences in performance based on the gender of faces or PTSD diagnosis. Immediate facial recognition was correlated with reported PTSD symptoms. Findings provide greater insight into women's reactions to, and the uniqueness of, the trauma of sexual victimization.

  14. Facial Expression Recognition Deficits and Faulty Learning: Implications for Theoretical Models and Clinical Applications

    ERIC Educational Resources Information Center

    Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige

    2009-01-01

    The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…

  15. Facial Affect Recognition in Violent and Nonviolent Antisocial Behavior Subtypes.

    PubMed

    Schönenberg, Michael; Mayer, Sarah Verena; Christian, Sandra; Louis, Katharina; Jusyte, Aiste

    2016-10-01

    Prior studies provide evidence for impaired recognition of distress cues in individuals exhibiting antisocial behavior. However, it remains unclear whether this deficit is generally associated with antisociality or may be specific to violent behavior only. To examine whether there are meaningful differences between the two behavioral dimensions, rule-breaking and aggression, violent and nonviolent incarcerated offenders as well as control participants were presented with an animated face recognition task in which a video sequence of a neutral face changed into an expression of one of the six basic emotions. The participants were instructed to press a button as soon as they were able to identify the emotional expression, allowing for an assessment of the perceived emotion onset. Both aggressive and nonaggressive offenders demonstrated a delayed perception of primarily fearful facial cues compared to controls. These results suggest the importance of targeting impaired emotional processing in both types of antisocial behavior.

  16. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.

  17. Expression intensity, gender and facial emotion recognition: Women recognize only subtle facial emotions better than men.

    PubMed

    Hoffmann, Holger; Kessler, Henrik; Eppel, Tobias; Rukavina, Stefanie; Traue, Harald C

    2010-11-01

    Two experiments were conducted in order to investigate the effect of expression intensity on gender differences in the recognition of facial emotions. The first experiment compared recognition accuracy between female and male participants when emotional faces were shown with full-blown (100% emotional content) or subtle expressiveness (50%). In a second experiment more finely grained analyses were applied in order to measure recognition accuracy as a function of expression intensity (40%-100%). The results show that although women were more accurate than men in recognizing subtle facial displays of emotion, there was no difference between male and female participants when recognizing highly expressive stimuli. Copyright © 2010 Elsevier B.V. All rights reserved.

  18. Facial expressions recognition with an emotion expressive robotic head

    NASA Astrophysics Data System (ADS)

    Doroftei, I.; Adascalitei, F.; Lefeber, D.; Vanderborght, B.; Doroftei, I. A.

    2016-08-01

    The purpose of this study is to present the preliminary steps in facial expressions recognition with a new version of an expressive social robotic head. In a first phase, our main goal was to reach a minimum level of emotional expressiveness, in order to obtain nonverbal communication between the robot and humans, by building six basic facial expressions. To evaluate the facial expressions, the robot was used in preliminary user studies among children and adults.

  19. Local binary pattern variants-based adaptive texture features analysis for posed and nonposed facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki

    2017-09-01

    Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects for optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for its extraction in images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features and adaptive CS-LBP features showed the highest FER rates. Evaluation of the adaptive texture features shows higher performance than the nonadaptive features and competitive performance relative to other state-of-the-art approaches.
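
    For reference, the plain CS-LBP operator mentioned above compares center-symmetric pairs of neighbors around each pixel and histograms the resulting 4-bit codes; the paper's contribution, choosing the neighborhood size from granulometry, is not reproduced here. A minimal numpy sketch with an assumed radius and threshold:

        # Center-symmetric LBP (radius 1) plus grid-of-histograms descriptor.
        import numpy as np

        def cs_lbp(image, radius=1, threshold=0.01):
            """Compare the four opposite neighbor pairs around each pixel."""
            img = image.astype(np.float64)
            h, w = img.shape
            # (N,S), (NE,SW), (E,W), (SE,NW) offsets for a 3x3 neighborhood
            pairs = [((-1, 0), (1, 0)), ((-1, 1), (1, -1)),
                     ((0, 1), (0, -1)), ((1, 1), (-1, -1))]
            codes = np.zeros((h - 2 * radius, w - 2 * radius), dtype=np.uint8)
            for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
                a = img[radius + dy1:h - radius + dy1, radius + dx1:w - radius + dx1]
                b = img[radius + dy2:h - radius + dy2, radius + dx2:w - radius + dx2]
                codes |= (a - b > threshold).astype(np.uint8) << bit
            return codes

        def descriptor(codes, grid=(4, 4)):
            """Concatenate per-cell histograms of the 16 possible codes."""
            hists = []
            for rows in np.array_split(codes, grid[0], axis=0):
                for cell in np.array_split(rows, grid[1], axis=1):
                    hist, _ = np.histogram(cell, bins=16, range=(0, 16))
                    hists.append(hist / max(cell.size, 1))
            return np.concatenate(hists)

        img = np.random.default_rng(6).random((64, 64))
        print(descriptor(cs_lbp(img)).shape)  # (256,) = 16 cells x 16 bins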

  20. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as in face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expressions. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
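
    The paper's network is a custom convolutional architecture; purely as a generic illustration of the idea (a grayscale face crop in, a smile/non-smile decision out), here is a toy PyTorch model whose layer sizes are arbitrary assumptions:

        # Toy CNN sketch for smile / non-smile classification of 64x64 crops.
        import torch
        import torch.nn as nn

        class SmileNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(16 * 16 * 16, 2)  # 64x64 input -> 16x16 maps

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        net = SmileNet()
        logits = net(torch.randn(4, 1, 64, 64))  # a batch of four face crops
        print(logits.shape)                      # torch.Size([4, 2])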

  1. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson's Disease.

    PubMed

    Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo

    2016-01-01

    Altered emotional processing, including reduced facial expression of emotion and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques, and it is not known whether altered facial expression and recognition in PD are related. To investigate possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Patients also yielded a worse Ekman global score and worse disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.001). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results of this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.

  3. Children's Recognition of Emotional Facial Expressions Through Photographs and Drawings.

    PubMed

    Brechet, Claire

    2017-01-01

    The author's purpose was to examine children's recognition of emotional facial expressions by comparing two types of stimulus: photographs and drawings. The author aimed to investigate whether drawings could be considered a more evocative material than photographs, as a function of age and emotion. Five- and 7-year-old children were presented with photographs and drawings displaying facial expressions of 4 basic emotions (i.e., happiness, sadness, anger, and fear) and were asked to perform a matching task by pointing to the face corresponding to the target emotion labeled by the experimenter. The photographs we used were selected from the Radboud Faces Database, and the drawings were designed on the basis of both the facial components involved in the expression of these emotions and the graphic cues children tend to use when asked to depict these emotions in their own drawings. Our results show that drawings are better recognized than photographs for sadness, anger, and fear (with no difference for happiness, due to a ceiling effect), and that the difference between the 2 types of stimuli tends to be larger for 5-year-olds than for 7-year-olds. These results are discussed in view of their implications, both for future research and for practical application.

  4. [Association between intelligence development and facial expression recognition ability in children with autism spectrum disorder].

    PubMed

    Pan, Ning; Wu, Gui-Hua; Zhang, Ling; Zhao, Ya-Fen; Guan, Han; Xu, Cai-Juan; Jing, Jin; Jin, Yu

    2017-03-01

    To investigate the features of intelligence development and facial expression recognition ability, and the association between them, in children with autism spectrum disorder (ASD), a total of 27 ASD children aged 6-16 years (ASD group, full intelligence quotient > 70) and age- and gender-matched normally developing children (control group) were enrolled. The Wechsler Intelligence Scale for Children, Fourth Edition, and Chinese Static Facial Expression Photos were used for the intelligence evaluation and the facial expression recognition test. Compared with the control group, the ASD group had significantly lower scores for full intelligence quotient, verbal comprehension index, perceptual reasoning index (PRI), processing speed index (PSI), and working memory index (WMI) (P<0.05). The ASD group also had a significantly lower overall accuracy rate of facial expression recognition, and significantly lower accuracy rates for the recognition of happy, angry, sad, and frightened expressions, than the control group (P<0.05). In the ASD group, the overall accuracy rate of facial expression recognition and the accuracy rates for the recognition of happy and frightened expressions were positively correlated with PRI (r=0.415, 0.455, and 0.393 respectively; P<0.05). The accuracy rate for the recognition of angry expressions was positively correlated with WMI (r=0.397; P<0.05). ASD children have delayed intelligence development compared with normally developing children, as well as impaired expression recognition ability. Perceptual reasoning and working memory abilities are positively correlated with expression recognition ability, which suggests that insufficient perceptual reasoning and working memory abilities may be important factors affecting facial expression recognition in ASD children.

  5. Facial affect recognition in symptomatically remitted patients with schizophrenia and bipolar disorder.

    PubMed

    Yalcin-Siedentopf, Nursen; Hoertnagl, Christine M; Biedermann, Falko; Baumgartner, Susanne; Deisenhammer, Eberhard A; Hausmann, Armand; Kaufmann, Alexandra; Kemmler, Georg; Mühlbacher, Moritz; Rauch, Anna-Sophia; Fleischhacker, W Wolfgang; Hofer, Alex

    2014-02-01

    Both schizophrenia and bipolar disorder (BD) have consistently been associated with deficits in facial affect recognition (FAR). These impairments have been related to various aspects of social competence and functioning and are relatively stable over time. However, individuals in remission may outperform patients experiencing an acute phase of the disorders. The present study directly contrasted FAR in symptomatically remitted patients with schizophrenia or BD and healthy volunteers, and investigated its relationship with patients' outcomes. Compared to healthy control subjects, schizophrenia patients were impaired in the recognition of angry, disgusted, sad and happy facial expressions, while BD patients showed deficits only in the recognition of disgusted and happy facial expressions. When directly comparing the two patient groups, individuals suffering from BD outperformed those with schizophrenia in the recognition of expressions depicting anger. There was no significant association between affect recognition abilities and symptomatic or psychosocial outcomes in schizophrenia patients. Among BD patients, relatively higher depression scores were associated with impairments in both the identification of happy faces and psychosocial functioning. Overall, our findings indicate that during periods of symptomatic remission the recognition of facial affect may be less impaired in patients with BD than in those suffering from schizophrenia. However, in the psychosocial context BD patients seem to be more sensitive to residual symptomatology. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Learning the moves: the effect of familiarity and facial motion on person recognition across large changes in viewing format.

    PubMed

    Roark, Dana A; O'Toole, Alice J; Abdi, Hervé; Barrett, Susan E

    2006-01-01

    Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (experiment 1a) or were familiarized with faces and were tested with gait videos (experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos, than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.

  7. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem in which seven classes (happiness, anger, sadness, disgust, surprise, fear and contempt) are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is acquired on the CK+ database. On the other hand, the performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol. Seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if a good localization of facial points and a good partitioning strategy are followed.
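
    The strictly person-independent (SPI) protocol mentioned above keeps all images of a given subject in either the training or the test fold, never both. A scikit-learn sketch of that split, with synthetic vectors standing in for the LBP or SIFT descriptors:

        # Subject-grouped cross-validation: no identity overlap across folds.
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import GroupKFold, cross_val_score

        rng = np.random.default_rng(3)
        X = rng.normal(size=(700, 128))             # one descriptor per face image
        y = rng.integers(0, 7, size=700)            # seven expression classes
        subject_id = rng.integers(0, 95, size=700)  # who appears in each image

        acc = cross_val_score(LinearSVC(max_iter=5000), X, y,
                              groups=subject_id, cv=GroupKFold(n_splits=5))
        print(f"subject-independent accuracy: {acc.mean():.2%}")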

  8. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    PubMed

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Instructions to mimic improve facial emotion recognition in people with sub-clinical autism traits.

    PubMed

    Lewis, Michael B; Dunn, Emily

    2017-11-01

    People tend to mimic the facial expression of others. It has been suggested that this helps provide social glue between affiliated people, but it could also aid recognition of emotions through embodied cognition. The degree of facial mimicry, however, varies between individuals and is limited in people with autism spectrum conditions (ASC). The present study sought to investigate the effect of promoting facial mimicry during a facial-emotion-recognition test. In two experiments, participants without an ASC diagnosis had their autism quotient (AQ) measured. Following a baseline test, they completed an emotion-recognition test again, but half of the participants were asked to mimic the target face they saw prior to making their responses. Mimicry improved emotion recognition, and further analysis revealed that the largest improvement was for participants with higher autism-trait scores. In fact, recognition performance was best overall for people who had high AQ scores but also received the instruction to mimic. Implications for people with ASC are explored.

  10. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    PubMed

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
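
    The within-network connectivity (WNC) measure described above is, in essence, each voxel's average correlation with every other voxel of the face network at rest. A rough numpy sketch over synthetic time series (the real analysis operates on preprocessed resting-state fMRI):

        # Voxel-wise within-network connectivity from a timepoints-x-voxels array.
        import numpy as np

        rng = np.random.default_rng(4)
        ts = rng.normal(size=(240, 500))  # 240 timepoints x 500 face-network voxels

        z = (ts - ts.mean(axis=0)) / ts.std(axis=0)  # standardize each voxel series
        corr = (z.T @ z) / ts.shape[0]               # voxel-by-voxel correlations
        np.fill_diagonal(corr, np.nan)               # ignore self-correlation
        wnc = np.nanmean(corr, axis=1)               # one WNC value per voxel
        print(wnc.shape)                             # (500,)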

  11. Facial soft biometric features for forensic face recognition.

    PubMed

    Tome, Pedro; Vera-Rodriguez, Ruben; Fierrez, Julian; Ortega-Garcia, Javier

    2015-12-01

    This paper proposes a functional feature-based approach useful for real forensic caseworks, based on the shape, orientation and size of facial traits, which can be considered a soft biometric approach. The motivation of this work is to provide a set of facial features that can be understood by non-experts such as judges, and to support the work of forensic examiners who, in practice, carry out a thorough manual comparison of face images, paying special attention to the similarities and differences in shape and size of various facial traits. This new approach constitutes a tool that automatically converts a set of facial landmarks to a set of features (shape and size) corresponding to facial regions of forensic value. These features are furthermore evaluated in a population to generate statistics to support forensic examiners. The proposed features can also be used as additional information to improve the performance of traditional face recognition systems. These features follow the forensic methodology and are obtained in a continuous and discrete manner from raw images. A statistical analysis is also carried out to study the stability, discrimination power and correlation of the proposed facial features on two realistic databases: MORPH and ATVS Forensic DB. Finally, the performance of both continuous and discrete features is analyzed using different similarity measures. Experimental results show high discrimination power and good recognition performance, especially for continuous features. A final fusion of the best system configurations achieves rank-10 match results of 100% for the ATVS database and 75% for the MORPH database, demonstrating the benefits of using this information in practice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
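
    As an illustration of the landmark-to-feature conversion described (not the authors' actual feature set), scale-normalized distances and ratios can be derived from labeled facial points; the landmark names and coordinates below are hypothetical:

        # Simple shape/size features from 2-D facial landmarks.
        import numpy as np

        landmarks = {  # hypothetical pixel coordinates
            "eye_l": (102.0, 120.0), "eye_r": (168.0, 121.0),
            "nose_tip": (135.0, 165.0),
            "mouth_l": (110.0, 200.0), "mouth_r": (160.0, 201.0),
        }

        def dist(a, b):
            return float(np.hypot(a[0] - b[0], a[1] - b[1]))

        iod = dist(landmarks["eye_l"], landmarks["eye_r"])  # inter-ocular scale
        mouth_mid = ((landmarks["mouth_l"][0] + landmarks["mouth_r"][0]) / 2,
                     (landmarks["mouth_l"][1] + landmarks["mouth_r"][1]) / 2)
        features = {
            "mouth_width/iod": dist(landmarks["mouth_l"], landmarks["mouth_r"]) / iod,
            "nose_to_mouth/iod": dist(landmarks["nose_tip"], mouth_mid) / iod,
        }
        print(features)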

  12. Oxytocin improves facial emotion recognition in young adults with antisocial personality disorder.

    PubMed

    Timmermann, Marion; Jeung, Haang; Schmitt, Ruth; Boll, Sabrina; Freitag, Christine M; Bertsch, Katja; Herpertz, Sabine C

    2017-11-01

    Deficient facial emotion recognition has been suggested to underlie aggression in individuals with antisocial personality disorder (ASPD). As the neuropeptide oxytocin (OT) has been shown to improve facial emotion recognition, it might also exert beneficial effects in individuals whose behavior causes serious harm to society. In a double-blind, randomized, placebo-controlled crossover trial, 22 individuals with ASPD and 29 healthy control (HC) subjects (matched for age, sex, intelligence, and education) were intranasally administered either OT (24 IU) or a placebo 45 min before participating in an emotion classification paradigm with fearful, angry, and happy faces. We assessed the number of correct classifications and reaction times as indicators of emotion recognition ability. Significant group × substance × emotion interactions were found in correct classifications and reaction times. Compared to HC, individuals with ASPD showed deficits in recognizing fearful and happy faces; these group differences were no longer observable under OT. Additionally, reaction times for angry faces differed significantly between the ASPD and HC groups in the placebo condition. This effect was mainly driven by longer reaction times in HC subjects after placebo administration compared to OT administration, while individuals with ASPD showed, descriptively, the opposite response pattern. Our data indicate an improvement in the recognition of fearful and happy facial expressions with OT in young adults with ASPD. The increased recognition of facial fear is of particular importance, since the correct perception of distress signals in others is thought to inhibit aggression. Beneficial effects of OT might be further mediated by improved recognition of facial happiness, probably reflecting increased social reward responsiveness. Copyright © 2017. Published by Elsevier Ltd.

  13. Development of Emotional Facial Recognition in Late Childhood and Adolescence

    ERIC Educational Resources Information Center

    Thomas, Laura A.; De Bellis, Michael D.; Graham, Reiko; Labar, Kevin S.

    2007-01-01

    The ability to interpret emotions in facial expressions is crucial for social functioning across the lifespan. Facial expression recognition develops rapidly during infancy and improves with age during the preschool years. However, the developmental trajectory from late childhood to adulthood is less clear. We tested older children, adolescents…

  14. Fusiform gyrus volume reduction and facial recognition in chronic schizophrenia.

    PubMed

    Onitsuka, Toshiaki; Shenton, Martha E; Kasai, Kiyoto; Nestor, Paul G; Toner, Sarah K; Kikinis, Ron; Jolesz, Ferenc A; McCarley, Robert W

    2003-04-01

    The fusiform gyrus (FG), or occipitotemporal gyrus, is thought to subserve the processing and encoding of faces. Of note, several studies have reported that patients with schizophrenia show deficits in facial processing. It is thus hypothesized that the FG might be one brain region underlying abnormal facial recognition in schizophrenia. The objectives of this study were to determine whether there are abnormalities in gray matter volumes for the anterior and the posterior FG in patients with chronic schizophrenia and to investigate relationships between FG subregions and immediate and delayed memory for faces. Patients were recruited from the Boston VA Healthcare System, Brockton Division, and control subjects were recruited through newspaper advertisement. Study participants included 21 male patients diagnosed as having chronic schizophrenia and 28 male controls. Participants underwent high-spatial-resolution magnetic resonance imaging, and facial recognition memory was evaluated. Main outcome measures included anterior and posterior FG gray matter volumes based on high-spatial-resolution magnetic resonance imaging, a detailed and reliable manual delineation using 3-dimensional information, and correlation coefficients between FG subregions and raw scores on immediate and delayed facial memory derived from the Wechsler Memory Scale III. Patients with chronic schizophrenia had overall smaller FG gray matter volumes (10%) than normal controls. Additionally, patients with schizophrenia performed more poorly than normal controls in both immediate and delayed facial memory tests. Moreover, the degree of poor performance on delayed memory for faces was significantly correlated with the degree of bilateral anterior FG reduction in patients with schizophrenia. These results suggest that neuroanatomic FG abnormalities underlie at least some of the deficits associated with facial recognition in schizophrenia.

  15. Does vigilance to pain make individuals experts in facial recognition of pain?

    PubMed

    Baum, Corinna; Kappesser, Judith; Schneider, Raphaela; Lautenbacher, Stefan

    2013-01-01

    It is well known that individual factors are important in the facial recognition of pain. However, it is unclear whether vigilance to pain, as a pain-related attentional mechanism, is among these relevant factors. Vigilance to pain may have two different effects on the recognition of facial pain expressions: pain-vigilant individuals may detect pain faces better but overinclude other facial displays, misinterpreting them as expressing pain; or they may be true experts in discriminating between pain and other facial expressions. The present study aimed to test these two hypotheses. Furthermore, pain vigilance was assumed to be a distinct predictor, whose impact on recognition cannot be completely replaced by related concepts such as pain catastrophizing and fear of pain. Photographs of neutral, happy, angry and pain facial expressions were presented to 40 healthy participants, who were asked to classify them into the appropriate emotion categories and provide a confidence rating for each classification. Additionally, potential predictors of the discrimination performance for pain and anger faces (pain vigilance, pain-related catastrophizing, and fear of pain) were assessed using self-report questionnaires. Pain-vigilant participants classified pain faces more accurately and did not misclassify anger as pain faces more frequently. However, vigilance to pain was not related to the confidence of the recognition ratings. Pain catastrophizing and fear of pain did not account for the recognition performance. Moderate pain vigilance, as assessed in the present study, appears to be associated with appropriate detection of pain-related cues and not necessarily with the overinclusion of other negative cues.

  16. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    NASA Astrophysics Data System (ADS)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, beyond the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy (mRMR) geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative information between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
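    As a rough illustration of this kind of pipeline, the Python sketch below pairs a greedy, mutual-information-based selection (a simplification of the mRMR criterion) with a one-against-one multi-class SVM. The data are random placeholders standing in for extracted geometric features, not the BU-3DFE or UPM-3DFE databases, and the helper name mrmr_select is hypothetical.

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def mrmr_select(X, y, k):
          # Greedy mRMR: maximize relevance (MI with the labels) minus
          # mean redundancy (correlation with already-selected features).
          relevance = mutual_info_classif(X, y, random_state=0)
          selected = [int(np.argmax(relevance))]
          while len(selected) < k:
              best, best_score = None, -np.inf
              for j in range(X.shape[1]):
                  if j in selected:
                      continue
                  redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                        for s in selected])
                  if relevance[j] - redundancy > best_score:
                      best, best_score = j, relevance[j] - redundancy
              selected.append(best)
          return selected

      rng = np.random.default_rng(0)
      X = rng.normal(size=(210, 40))    # placeholder geometric features
      y = rng.integers(0, 7, size=210)  # seven expression labels
      cols = mrmr_select(X, y, k=10)
      clf = SVC(kernel="rbf", decision_function_shape="ovo")  # one-against-one
      print(cross_val_score(clf, X[:, cols], y, cv=5).mean())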

  17. Recognition of facial expressions and prosodic cues with graded emotional intensities in adults with Asperger syndrome.

    PubMed

    Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-09-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.

  18. Recognition of Facial Emotional Expression in Amnestic Mild Cognitive Impairment

    PubMed Central

    Varjassyová, Alexandra; Hořínek, Daniel; Andel, Ross; Amlerova, Jana; Laczó, Jan; Sheardová, Kateřina; Magerová, Hana; Holmerová, Iva; Vyhnálek, Martin; Bradáč, Ondřej; Geda, Yonas E.; Hort, Jakub

    2014-01-01

    We examined whether recognition of facial emotional expression would be affected in amnestic mild cognitive impairment (aMCI). A total of 50 elderly persons met the initial inclusion criteria; 10 were subsequently excluded (Geriatric Depression Scale score > 5). Twenty-two subjects were classified with aMCI based on published criteria (single domain aMCI [SD-aMCI], n = 10; multiple domain aMCI [MD-aMCI], n = 12); 18 subjects were cognitively normal. All underwent standard neurological and neuropsychological evaluations as well as tests of facial emotion recognition (FER) and famous faces identification (FFI). Among normal controls, FFI was negatively correlated with MMSE and positively correlated with executive function. Among patients with aMCI, FER was correlated with attention/speed of processing. No other correlations were significant. In a multinomial logistic regression model adjusted for age, sex, and education, a poorer score on FER, but not on FFI, was associated with greater odds of being classified as MD-aMCI (odds ratio [OR], 3.82; 95% confidence interval [CI], 1.05–13.91; p = 0.042). This association was not explained by memory or global cognitive score. There was no association between FER or FFI and SD-aMCI (OR, 1.13; 95% CI, 0.36–3.57; p = 0.836). Therefore, FER, but not FFI, may be impaired in MD-aMCI. This implies that in MD-aMCI, the tasks of FER and FFI may involve segregated neurocognitive networks. PMID:22954669
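    The odds ratios above come from a multinomial logistic regression; the sketch below shows the general form of such a model in Python with statsmodels, regressing group membership on an FER score adjusted for age, sex, and education. All data and variable names here are synthetic placeholders, not the study's data.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 50
      df = pd.DataFrame({
          "group": rng.integers(0, 3, n),  # 0=control, 1=SD-aMCI, 2=MD-aMCI
          "fer": rng.normal(20, 4, n),     # facial emotion recognition score
          "age": rng.normal(72, 6, n),
          "sex": rng.integers(0, 2, n),
          "edu": rng.normal(14, 3, n),
      })
      X = sm.add_constant(df[["fer", "age", "sex", "edu"]])
      fit = sm.MNLogit(df["group"], X).fit(disp=False)
      print(np.exp(fit.params))      # odds ratios vs. the reference group
      print(np.exp(fit.conf_int()))  # 95% confidence intervals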

  19. Brain correlates of musical and facial emotion recognition: evidence from the dementias.

    PubMed

    Hsieh, S; Hornberger, M; Piguet, O; Hodges, J R

    2012-07-01

    The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities (unfamiliar musical tunes and unknown faces), as well as volumetric MRI. Patients with SD were most impaired in the recognition of facial and musical emotions, particularly for negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions by facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. [Recognition of facial expression of emotions in Parkinson's disease: a theoretical review].

    PubMed

    Alonso-Recio, L; Serrano-Rodriguez, J M; Carvajal-Molina, F; Loeches-Alonso, A; Martin-Plasencia, P

    2012-04-16

    Emotional facial expression is a basic guide during social interaction; therefore, alterations in its expression or recognition impose important limitations on communication. Our aim was to examine facial expression recognition abilities and their possible impairment in Parkinson's disease. First, we review the studies on this topic, which have not reported entirely consistent results. Second, we analyze the factors that may explain these discrepancies and, in particular, as a third objective, we consider the relationship between emotion recognition problems and the cognitive impairment associated with the disease. Finally, we propose alternative strategies for designing studies that could clarify the state of these abilities in Parkinson's disease. Most studies suggest deficits in facial expression recognition, especially for expressions with negative emotional content. However, it is possible that these alterations are related to the impairments that also appear in the course of the disease in other perceptual and executive processes. To make progress on this issue, we consider it necessary to design emotion recognition studies that differentially engage executive or visuospatial processes, and/or that contrast performance on facial expressions with performance on non-emotional stimuli. Specifying the status of these abilities, as well as increasing our knowledge of the functional consequences of the brain damage characteristic of the disease, may indicate whether their rehabilitation deserves special attention within intervention programs.

  1. Face recognition using facial expression: a novel approach

    NASA Astrophysics Data System (ADS)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly among the most effective forms of nonverbal communication. The face has always been equated with a person's identity, and each line on the face adds an attribute to that identity. These lines become prominent when we experience an emotion, and they do not change completely with age. In this paper we propose a new technique for face recognition that focuses on the facial expressions of the subject to identify the face. This area has received little attention in earlier work. According to earlier research, it is difficult to alter one's natural expression, so our technique may be beneficial for identifying occluded or intentionally disguised faces. The results of the experiments conducted indicate that this technique can give a new direction to the field of face recognition, provide a strong base for the area, and serve as a core method for critical defense and security applications.

  2. Recognition of children on age-different images: Facial morphology and age-stable features.

    PubMed

    Caplova, Zuzana; Compassi, Valentina; Giancola, Silvio; Gibelli, Daniele M; Obertová, Zuzana; Poppa, Pasquale; Sala, Remo; Sforza, Chiarella; Cattaneo, Cristina

    2017-07-01

    The situation of missing children is one of the most emotional social issues worldwide. The search for and identification of missing children is often hampered, among other factors, by the fact that the facial morphology of long-term missing children changes as they grow. Nowadays, the wide coverage by surveillance systems potentially provides image material for comparisons with images of missing children that may facilitate identification. The aim of this study was to identify whether facial features are stable over time and can be utilized for facial recognition by comparing facial images of children at different ages, as well as to test the possible use of moles in recognition. The study was divided into two phases: (1) morphological classification of facial features using an Anthropological Atlas; (2) an algorithm developed in MATLAB® R2014b for assessing the use of moles as age-stable features. The assessment of facial features by Anthropological Atlases showed high mismatch percentages among observers. On average, the mismatch percentages were lower for features describing shape than for those describing size. The nose tip cleft and the chin dimple showed the best agreement between observers regarding both categorization and stability over time. Using the position of moles as a reference point for recognition of the same person on age-different images seems to be a useful method in terms of objectivity, and it can be concluded that moles represent age-stable facial features that may be considered for preliminary recognition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
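    The MATLAB® algorithm itself is not reproduced in the record; the Python sketch below only illustrates the underlying idea of using moles as age-stable reference points: normalize mole coordinates by the interocular axis so that scale and translation cancel, then look for counterparts within a tolerance. All coordinates and the tolerance value are hypothetical.

      import numpy as np

      def normalize(points, left_eye, right_eye):
          # Express coordinates in an eye-centered frame scaled by the
          # interocular distance, removing scale and translation.
          origin = (left_eye + right_eye) / 2.0
          iod = np.linalg.norm(right_eye - left_eye)
          return (points - origin) / iod

      def mole_match_score(moles_a, eyes_a, moles_b, eyes_b, tol=0.05):
          # Fraction of moles in image A with a counterpart in image B
          # within tol interocular-distance units (a hypothetical cutoff).
          na = normalize(moles_a, *eyes_a)
          nb = normalize(moles_b, *eyes_b)
          hits = sum(np.min(np.linalg.norm(nb - p, axis=1)) < tol for p in na)
          return hits / len(na)

      eyes_child = (np.array([110.0, 120.0]), np.array([160.0, 121.0]))
      eyes_teen = (np.array([300.0, 310.0]), np.array([420.0, 312.0]))
      moles_child = np.array([[135.0, 170.0], [150.0, 150.0]])
      moles_teen = np.array([[360.0, 430.0], [395.0, 382.0]])
      print(mole_match_score(moles_child, eyes_child, moles_teen, eyes_teen))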

  3. Facial emotion expression recognition by children at familial risk for depression: High risk boys are oversensitive to sadness

    PubMed Central

    Lopez-Duran, Nestor L.; Kuhlman, Kate R.; George, Charles; Kovacs, Maria

    2012-01-01

    In the present study we examined perceptual sensitivity to facial expressions of sadness among children at familial-risk for depression (N = 64) and low-risk peers (N = 40) between the ages of 7 and 13 (Mage = 9.51; SD = 2.27). Participants were presented with pictures of facial expressions that varied in emotional intensity from neutral to full-intensity sadness or anger (i.e., emotion recognition), or pictures of faces morphing from anger to sadness (emotion discrimination). After each picture was presented, children indicated whether the face showed a specific emotion (i.e., sadness, anger) or no emotion at all (neutral). In the emotion recognition task, boys (but not girls) at familial-risk for depression identified sadness at significantly lower levels of emotional intensity than did their low-risk peers. The high and low-risk groups did not differ with regard to identification of anger. In the emotion discrimination task, both groups displayed over-identification of sadness in ambiguous mixed faces but high-risk youth were less likely to show this labeling bias than their peers. Our findings are consistent with the hypothesis that enhanced perceptual sensitivity to subtle traces of sadness in facial expressions may be a potential mechanism of risk among boys at familial-risk for depression. This enhanced perceptual sensitivity does not appear to be due to biases in the labeling of ambiguous faces. PMID:23106941

  4. [The effect of the serotonin transporter 5-HTTLPR polymorphism on the recognition of facial emotions in schizophrenia].

    PubMed

    Alfimova, M V; Golimbet, V E; Korovaitseva, G I; Lezheiko, T V; Abramova, L I; Aksenova, E V; Bolgov, M I

    2014-01-01

    The 5-HTTLPR SLC6A4 and catechol-o-methyltransferase (COMT) Val158Met polymorphisms are reported to be associated with processing of facial expressions in the general population. The impaired recognition of facial expressions that is characteristic of schizophrenia negatively impacts the social adaptation of patients. To search for molecular mechanisms of this deficit, we studied main and epistatic effects of the 5-HTTLPR and Val158Met polymorphisms on facial emotion recognition in patients with schizophrenia (n=299) and healthy controls (n=232). The 5-HTTLPR polymorphism was associated with emotion recognition in patients. The ll-homozygotes recognized facial emotions significantly better than those with an s-allele (F=8.00; p=0.005). Although the recognition of facial emotions was correlated with negative symptoms, verbal learning and trait anxiety, these variables did not significantly modify the association. In both groups, no effect of COMT on the recognition of facial emotions was found.

  5. Holistic face processing can inhibit recognition of forensic facial composites.

    PubMed

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. (c) 2016 APA, all rights reserved.

  6. The Change in Facial Emotion Recognition Ability in Inpatients with Treatment Resistant Schizophrenia After Electroconvulsive Therapy.

    PubMed

    Dalkıran, Mihriban; Tasdemir, Akif; Salihoglu, Tamer; Emul, Murat; Duran, Alaattin; Ugur, Mufit; Yavuz, Ruhi

    2017-09-01

    People with schizophrenia have impairments in emotion recognition along with other social cognitive deficits. In the current study, we aimed to investigate the immediate benefits of ECT on facial emotion recognition ability. Thirty-two treatment-resistant patients with schizophrenia for whom ECT was indicated enrolled in the study. Facial emotion stimuli were a set of 56 photographs that depicted seven basic emotions: sadness, anger, happiness, disgust, surprise, fear, and neutral faces. The average age of the participants was 33.4 ± 10.5 years. The rate of recognizing the disgusted facial expression increased significantly after ECT (p < 0.05) and no significant changes were found for the rest of the facial expressions (p > 0.05). After ECT, response times for fearful and happy facial expressions were significantly shorter (p < 0.05). Facial emotion recognition ability is an important social cognitive skill for social harmony, appropriate relationships and independent living. At the least, ECT does not seem to affect facial emotion recognition ability negatively, and it appears to improve identification of the disgusted facial expression, which is associated with dopamine-rich regions of the brain.

  7. Automated Facial Recognition of Computed Tomography-Derived Facial Images: Patient Privacy Implications.

    PubMed

    Parks, Connie L; Monson, Keith L

    2017-04-01

    The recognizability of facial images extracted from publicly available medical scans raises patient privacy concerns. This study examined how accurately facial images extracted from computed tomography (CT) scans are objectively matched with corresponding photographs of the scanned individuals. The test subjects were 128 adult Americans ranging in age from 18 to 60 years, representing both sexes and three self-identified population (ancestral descent) groups (African, European, and Hispanic). Using facial recognition software, the 2D images of the extracted facial models were compared for matches against five differently sized photo galleries. Depending on the scanning protocol and gallery size, in 6-61% of cases a correct life photo match for a CT-derived facial image was the top ranked image in the generated candidate lists, even when blind searching in excess of 100,000 images. In 31-91% of cases, a correct match was located within the top 50 images. Few significant differences (p > 0.05) in match rates were observed between the sexes or across the three age cohorts. Highly significant differences (p < 0.01) were, however, observed across the three ancestral cohorts and between the two CT scanning protocols. Results suggest that the probability of a match between a facial image extracted from a medical scan and a photograph of the individual is moderately high. The facial image data inherent in commonly employed medical imaging modalities may need to be considered a potentially identifiable form of "comparable" facial imagery and protected as such under patient privacy legislation.
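    The rank-based match rates reported above can be scored with a few lines of code once each probe has a ranked candidate list. The Python sketch below computes rank-k match rates on synthetic candidate lists; the gallery, probes, and ranks are all fabricated for illustration.

      import numpy as np

      def rank_k_rate(true_ids, candidate_lists, k):
          # Fraction of probes whose correct identity appears in the
          # top k of its ranked candidate list.
          hits = sum(tid in cands[:k]
                     for tid, cands in zip(true_ids, candidate_lists))
          return hits / len(true_ids)

      rng = np.random.default_rng(2)
      gallery = list(range(1000))
      true_ids = rng.choice(gallery, size=128, replace=False)
      candidate_lists = []
      for tid in true_ids:
          cands = rng.permutation(gallery).tolist()
          cands.remove(tid)
          cands.insert(rng.integers(0, 200), tid)  # plant the true match
          candidate_lists.append(cands)

      for k in (1, 50):
          print(f"rank-{k} match rate: "
                f"{rank_k_rate(true_ids, candidate_lists, k):.2f}")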

  8. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    PubMed

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of surprise, disgust, fear, happiness, and neutral expressions, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces that indicate more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Does vigilance to pain make individuals experts in facial recognition of pain?

    PubMed Central

    Baum, Corinna; Kappesser, Judith; Schneider, Raphaela; Lautenbacher, Stefan

    2013-01-01

    BACKGROUND: It is well known that individual factors are important in the facial recognition of pain. However, it is unclear whether vigilance to pain as a pain-related attentional mechanism is among these relevant factors. OBJECTIVES: Vigilance to pain may have two different effects on the recognition of facial pain expressions: pain-vigilant individuals may detect pain faces better but overinclude other facial displays, misinterpreting them as expressing pain; or they may be true experts in discriminating between pain and other facial expressions. The present study aimed to test these two hypotheses. Furthermore, pain vigilance was assumed to be a distinct predictor, the impact of which on recognition cannot be completely replaced by related concepts such as pain catastrophizing and fear of pain. METHODS: Photographs of neutral, happy, angry and pain facial expressions were presented to 40 healthy participants, who were asked to classify them into the appropriate emotion categories and provide a confidence rating for each classification. Additionally, potential predictors of the discrimination performance for pain and anger faces – pain vigilance, pain-related catastrophizing, fear of pain – were assessed using self-report questionnaires. RESULTS: Pain-vigilant participants classified pain faces more accurately and did not misclassify anger as pain faces more frequently. However, vigilance to pain was not related to the confidence of recognition ratings. Pain catastrophizing and fear of pain did not account for the recognition performance. CONCLUSIONS: Moderate pain vigilance, as assessed in the present study, appears to be associated with appropriate detection of pain-related cues and not necessarily with the overinclusion of other negative cues. PMID:23717826

  10. Altering sensorimotor feedback disrupts visual discrimination of facial expressions.

    PubMed

    Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula

    2016-08-01

    Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual, and not just conceptual, processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

  11. Automated facial recognition of manually generated clay facial approximations: Potential application in unidentified persons data repositories.

    PubMed

    Parks, Connie L; Monson, Keith L

    2018-01-01

    This research examined how accurately 2D images (i.e., photographs) of 3D clay facial approximations were matched to corresponding photographs of the approximated individuals using an objective automated facial recognition system. Irrespective of search filter (i.e., blind, sex, or ancestry) or rank class (R1, R10, R25, and R50) employed, few operationally informative results were observed. In only a single instance of 48 potential match opportunities was a clay approximation matched to a corresponding life photograph within the top 50 images (R50) of a candidate list, even with relatively small gallery sizes created from the application of search filters (e.g., sex or ancestry search restrictions). Increasing the candidate lists to include the top 100 images (R100) resulted in only two additional instances of correct match. Although other untested variables (e.g., approximation method, 2D photographic process, and practitioner skill level) may have impacted the observed results, this study suggests that 2D images of manually generated clay approximations are not readily matched to life photos by automated facial recognition systems. Further investigation is necessary in order to identify the underlying cause(s), if any, of the poor recognition results observed in this study (e.g., potential inferior facial feature detection and extraction). Additional inquiry exploring prospective remedial measures (e.g., stronger feature differentiation) is also warranted, particularly given the prominent use of clay approximations in unidentified persons casework. Copyright © 2017. Published by Elsevier B.V.

  12. Facial Emotion Recognition Impairments are Associated with Brain Volume Abnormalities in Individuals with HIV

    PubMed Central

    Clark, Uraina S.; Walker, Keenan A.; Cohen, Ronald A.; Devlin, Kathryn N.; Folkers, Anna M.; Pina, Mathew M.; Tashima, Karen T.

    2015-01-01

    Impaired facial emotion recognition abilities in HIV+ patients are well documented, but little is known about the neural etiology of these difficulties. We examined the relation of facial emotion recognition abilities to regional brain volumes in 44 HIV-positive (HIV+) and 44 HIV-negative control (HC) adults. Volumes of structures implicated in HIV-associated neuropathology and emotion recognition were measured on MRI using an automated segmentation tool. Relative to HC, HIV+ patients demonstrated emotion recognition impairments for fearful expressions, reduced anterior cingulate cortex (ACC) volumes, and increased amygdala volumes. In the HIV+ group, fear recognition impairments correlated significantly with ACC, but not amygdala volumes. ACC reductions were also associated with lower nadir CD4 levels (i.e., greater HIV-disease severity). These findings extend our understanding of the neurobiological substrates underlying an essential social function, facial emotion recognition, in HIV+ individuals and implicate HIV-related ACC atrophy in the impairment of these abilities. PMID:25744868

  13. Ventrolateral prefrontal cortex and the effects of task demand context on facial affect appraisal in schizophrenia.

    PubMed

    Leitman, David I; Wolf, Daniel H; Loughead, James; Valdez, Jeffrey N; Kohler, Christian G; Brensinger, Colleen; Elliott, Mark A; Turetsky, Bruce I; Gur, Raquel E; Gur, Ruben C

    2011-01-01

    Schizophrenia patients display impaired performance and brain activity during facial affect recognition. These impairments may reflect stimulus-driven perceptual decrements and evaluative processing abnormalities. We differentiated these two processes by contrasting responses to identical stimuli presented under different contexts. Seventeen healthy controls and 16 schizophrenia patients performed an fMRI facial affect detection task. Subjects identified an affective target presented amongst foils of differing emotions. We hypothesized that targeting affiliative emotions (happiness, sadness) would create a task demand context distinct from that generated when targeting threat emotions (anger, fear). We compared affiliative foil stimuli within a congruent affiliative context with identical stimuli presented in an incongruent threat context. Threat foils were analysed in the same manner. Controls activated right orbitofrontal cortex (OFC)/ventrolateral prefrontal cortex (VLPFC) more to affiliative foils in threat contexts than to identical stimuli within affiliative contexts. Patients displayed reduced OFC/VLPFC activation to all foils, and no activation modulation by context. This lack of context modulation coincided with a 2-fold decrement in foil detection efficiency. Task demands produce contextual effects during facial affective processing in regions activated during affect evaluation. In schizophrenia, reduced modulation of OFC/VLPFC by context coupled with reduced behavioural efficiency suggests impaired ventral prefrontal control mechanisms that optimize affective appraisal.

  14. "Now I see it, now I don't": Determining Threshold Levels of Facial Emotion Recognition for Use in Patient Populations.

    PubMed

    Chiu, Isabelle; Gfrörer, Regina I; Piguet, Olivier; Berres, Manfred; Monsch, Andreas U; Sollberger, Marc

    2015-08-01

    The importance of including measures of emotion processing, such as tests of facial emotion recognition (FER), as part of a comprehensive neuropsychological assessment is being increasingly recognized. In clinical settings, FER tests need to be sensitive, short, and easy to administer, given the limited time available and patient limitations. Current tests, however, commonly use stimuli that either display prototypical emotions, bearing the risk of ceiling effects and unequal task difficulty, or are cognitively too demanding and time-consuming. To overcome these limitations in FER testing in patient populations, we aimed to define FER threshold levels for the six basic emotions in healthy individuals. Forty-nine healthy individuals between 52 and 79 years of age were asked to identify the six basic emotions at different intensity levels (25%, 50%, 75%, 100%, and 125% of the prototypical emotion). Analyses uncovered differing threshold levels across emotions and sex of facial stimuli, ranging from 50% up to 100% intensities. Using these findings as "healthy population benchmarks", we propose to apply these threshold levels to clinical populations either as facial emotion recognition or intensity rating tasks. As part of any comprehensive social cognition test battery, this approach should allow for a rapid and sensitive assessment of potential FER deficits.
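    A threshold of this kind reduces to finding the lowest tested intensity at which group accuracy reaches a chosen criterion. The Python snippet below illustrates the computation; the 80% criterion and all accuracy values are made-up placeholders, not the study's data.

      intensities = [25, 50, 75, 100, 125]  # % of the prototypical emotion
      accuracy = {"happiness": [0.62, 0.85, 0.95, 0.99, 0.99],
                  "fear":      [0.30, 0.48, 0.71, 0.83, 0.90]}

      def threshold(acc_by_intensity, criterion=0.80):
          # Lowest intensity whose accuracy meets the criterion.
          for level, acc in zip(intensities, acc_by_intensity):
              if acc >= criterion:
                  return level
          return None  # criterion never reached in the tested range

      for emotion, accs in accuracy.items():
          print(emotion, "->", threshold(accs), "% intensity")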

  15. Effect of camera resolution and bandwidth on facial affect recognition.

    PubMed

    Cruz, Mario; Cruz, Robyn Flaum; Krupinski, Elizabeth A; Lopez, Ana Maria; McNeeley, Richard M; Weinstein, Ronald S

    2004-01-01

    This preliminary study explored the effect of camera resolution and bandwidth on facial affect recognition, an important process and clinical variable in mental health service delivery. Sixty medical students and mental health-care professionals were recruited and randomized to four different combinations of commonly used teleconferencing camera resolutions and bandwidths: (1) a one-chip charge-coupled device (CCD) camera, commonly used for VHS-grade taping and in teleconferencing systems costing less than $4,000, with a resolution of 280 lines, at 128 kilobits per second (kbps) bandwidth; (2) VHS and 768 kbps; (3) a three-chip CCD camera, commonly used for Betacam (Beta)-grade taping and in teleconferencing systems costing more than $4,000, with a resolution of 480 lines, at 128 kbps; and (4) Betacam and 768 kbps. The subjects were asked to identify four facial affects dynamically presented on videotape by an actor and actress presented via a video monitor at 30 frames per second. Two-way analysis of variance (ANOVA) revealed a significant interaction effect for camera resolution and bandwidth (p = 0.02) and a significant main effect for camera resolution (p = 0.006), but no main effect for bandwidth was detected. Post hoc testing of interaction means, using the Tukey Honestly Significant Difference (HSD) test and the critical difference (CD) at the 0.05 alpha level = 1.71, revealed that subjects in the VHS/768 kbps (M = 7.133) and VHS/128 kbps (M = 6.533) conditions were significantly better at recognizing the displayed facial affects than those in the Betacam/768 kbps (M = 4.733) or Betacam/128 kbps (M = 6.333) conditions. Camera resolution and bandwidth combinations differ in their capacity to influence facial affect recognition. For service providers, this study's results support the use of VHS cameras with either 768 kbps or 128 kbps bandwidths for facial affect recognition compared to Betacam cameras. The authors argue that the results of this study are a consequence of the
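    The analysis described above, a two-way ANOVA with Tukey HSD post hoc tests, can be reproduced in outline with statsmodels. The Python sketch below uses synthetic recognition scores; only the factor structure (resolution x bandwidth) mirrors the study.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.formula.api import ols
      from statsmodels.stats.multicomp import pairwise_tukeyhsd

      rng = np.random.default_rng(3)
      df = pd.DataFrame({
          "resolution": np.repeat(["VHS", "Betacam"], 30),
          "bandwidth": np.tile(np.repeat(["128kbps", "768kbps"], 15), 2),
      })
      df["score"] = rng.normal(6, 1, len(df)).round()  # synthetic scores

      model = ols("score ~ C(resolution) * C(bandwidth)", data=df).fit()
      print(sm.stats.anova_lm(model, typ=2))        # main + interaction effects
      cells = df["resolution"] + "/" + df["bandwidth"]
      print(pairwise_tukeyhsd(df["score"], cells))  # post hoc cell comparisons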

  16. Multimedia Content Development as a Facial Expression Datasets for Recognition of Human Emotions

    NASA Astrophysics Data System (ADS)

    Mamonto, N. E.; Maulana, H.; Liliana, D. Y.; Basaruddin, T.

    2018-02-01

    Previously developed datasets contain facial expressions of foreign subjects. The development of this multimedia content aims to address the problems experienced by the research team and by other researchers who will conduct similar research. The method used to develop the multimedia content as a facial expression dataset for human emotion recognition is the Villamil-Molina version of the multimedia development method. The multimedia content was developed with 10 subjects (talents), each performing 3 shots and demonstrating 19 facial expressions in each shot. After editing and rendering, tests were carried out, with the conclusion that the multimedia content can be used as a facial expression dataset for recognition of human emotions.

  17. Acute effects of delta-9-tetrahydrocannabinol, cannabidiol and their combination on facial emotion recognition: a randomised, double-blind, placebo-controlled study in cannabis users.

    PubMed

    Hindocha, Chandni; Freeman, Tom P; Schafer, Grainne; Gardener, Chelsea; Das, Ravi K; Morgan, Celia J A; Curran, H Valerie

    2015-03-01

    Acute administration of the primary psychoactive constituent of cannabis, Δ-9-tetrahydrocannabinol (THC), impairs human facial affect recognition, implicating the endocannabinoid system in emotional processing. Another main constituent of cannabis, cannabidiol (CBD), has seemingly opposite functional effects on the brain. This study aimed to determine the effects of THC and CBD, both alone and in combination, on emotional facial affect recognition. 48 volunteers, selected for high and low frequency of cannabis use and schizotypy, were administered THC (8 mg), CBD (16 mg), THC+CBD (8 mg + 16 mg) and placebo, by inhalation, in a 4-way, double-blind, placebo-controlled crossover design. They completed an emotional facial affect recognition task including fearful, angry, happy, sad, surprise and disgust faces varying in intensity from 20% to 100%. A visual analogue scale (VAS) of feeling 'stoned' was also completed. In comparison to placebo, CBD improved emotional facial affect recognition at 60% emotional intensity; THC was detrimental to the recognition of ambiguous faces of 40% intensity. The combination of THC+CBD produced no impairment. Relative to placebo, both THC alone and combined THC+CBD equally increased feelings of being 'stoned'. CBD did not influence feelings of 'stoned'. No effects of frequency of use or schizotypy were found. In conclusion, CBD improves recognition of emotional facial affect and attenuates the impairment induced by THC. This is the first human study examining the effects of different cannabinoids on emotional processing. It provides preliminary evidence that different pharmacological agents acting upon the endocannabinoid system can both improve and impair recognition of emotional faces. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Acute effects of delta-9-tetrahydrocannabinol, cannabidiol and their combination on facial emotion recognition: A randomised, double-blind, placebo-controlled study in cannabis users

    PubMed Central

    Hindocha, Chandni; Freeman, Tom P.; Schafer, Grainne; Gardener, Chelsea; Das, Ravi K.; Morgan, Celia J.A.; Curran, H. Valerie

    2015-01-01

    Acute administration of the primary psychoactive constituent of cannabis, Δ-9-tetrahydrocannabinol (THC), impairs human facial affect recognition, implicating the endocannabinoid system in emotional processing. Another main constituent of cannabis, cannabidiol (CBD), has seemingly opposite functional effects on the brain. This study aimed to determine the effects of THC and CBD, both alone and in combination, on emotional facial affect recognition. 48 volunteers, selected for high and low frequency of cannabis use and schizotypy, were administered THC (8 mg), CBD (16 mg), THC+CBD (8 mg+16 mg) and placebo, by inhalation, in a 4-way, double-blind, placebo-controlled crossover design. They completed an emotional facial affect recognition task including fearful, angry, happy, sad, surprise and disgust faces varying in intensity from 20% to 100%. A visual analogue scale (VAS) of feeling ‘stoned’ was also completed. In comparison to placebo, CBD improved emotional facial affect recognition at 60% emotional intensity; THC was detrimental to the recognition of ambiguous faces of 40% intensity. The combination of THC+CBD produced no impairment. Relative to placebo, both THC alone and combined THC+CBD equally increased feelings of being ‘stoned’. CBD did not influence feelings of ‘stoned’. No effects of frequency of use or schizotypy were found. In conclusion, CBD improves recognition of emotional facial affect and attenuates the impairment induced by THC. This is the first human study examining the effects of different cannabinoids on emotional processing. It provides preliminary evidence that different pharmacological agents acting upon the endocannabinoid system can both improve and impair recognition of emotional faces. PMID:25534187

  19. Facial emotion recognition in Williams syndrome and Down syndrome: A matching and developmental study.

    PubMed

    Martínez-Castilla, Pastora; Burt, Michael; Borgatti, Renato; Gagliardi, Chiara

    2015-01-01

    In this study, both the matching and developmental trajectories approaches were used to clarify questions that remain open in the literature on facial emotion recognition in Williams syndrome (WS) and Down syndrome (DS). The matching approach showed that individuals with WS or DS exhibit neither proficiency for the expression of happiness nor specific impairments for negative emotions. Instead, they present the same pattern of emotion recognition as typically developing (TD) individuals. Thus, the better performance on the recognition of positive compared to negative emotions usually reported in WS and DS is not specific to these populations but seems to represent a typical pattern. Prior studies based on the matching approach suggested that the development of facial emotion recognition is delayed in WS and atypical in DS. Nevertheless, and even though performance levels were lower in DS than in WS, the developmental trajectories approach used in this study showed that not only individuals with DS but also those with WS present atypical development in facial emotion recognition. Unlike in the TD participants, where developmental changes were observed with age, in the WS and DS groups the development of facial emotion recognition was static. Both individuals with WS and those with DS reached an early maximum developmental level due to cognitive constraints.

  20. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. Using biometric technology with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fear, and disgusted. Then a Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification process of facial expression. The MELS-SVM model, evaluated on our 185 expression images of 10 persons, achieved a high accuracy of 99.998% using an RBF kernel.
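    The same pipeline shape, PCA features feeding a multiclass SVM, can be sketched with scikit-learn. In the Python example below, a standard RBF-kernel SVC stands in for the paper's ensemble least-squares variant (MELS-SVM), and the images are random placeholders rather than the authors' 185-image set.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)
      X = rng.normal(size=(185, 32 * 32))  # flattened placeholder face images
      y = rng.integers(0, 6, size=185)     # happy, sad, neutral, angry, fear, disgusted

      pipe = make_pipeline(PCA(n_components=40), SVC(kernel="rbf"))
      print(cross_val_score(pipe, X, y, cv=5).mean())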

  1. Proposal of Self-Learning and Recognition System of Facial Expression

    NASA Astrophysics Data System (ADS)

    Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko

    We describe the realization of a more complex function using information acquired from several simple built-in functions. A self-learning and recognition system for human facial expressions, achieved through natural interaction between human and robot, is proposed. A robot with this system can understand human facial expressions and behave according to them after completing the learning process. The system is modeled on the process by which a baby learns its parents' facial expressions. Equipped with a camera, the robot can capture face images, and CdS sensors on the robot's head provide information about human actions. Using the information from these sensors, the robot extracts features of each facial expression. After self-learning is completed, when a person changes his or her facial expression in front of the robot, the robot performs the action associated with the relevant facial expression.

  2. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    PubMed Central

    Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643

  3. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia.

    PubMed

    Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  4. A spiking neural network based cortex-like mechanism and application to facial expression recognition.

    PubMed

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-simulated model that can be described simply as a feedforward, hierarchical simulation of the ventral stream of visual cortex, using a biologically plausible, computationally convenient spiking neural network (SNN) system. The motivation comes directly from recent pioneering work on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and from developments in artificial spiking neural networks. By combining the logical structure of the cortical hierarchy with the computing power of the spiking neuron model, a practical framework is presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortex-like feedforward hierarchy can handle complicated pattern recognition problems. This suggests that, by combining cognitive models with modern neurocomputational approaches, the neurosystematic study of cortex-like mechanisms has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in rich, dynamic, and complex environments, providing a new starting point for improved models of visual cortex-like mechanisms.
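    The building block of such feedforward spiking networks is typically a leaky integrate-and-fire (LIF) neuron. The Python toy below simulates one; all parameters are illustrative defaults, not values from the paper.

      import numpy as np

      def lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
          # Integrate a leaky membrane; emit a spike and reset at threshold.
          v, trace, spikes = v_rest, [], []
          for t, i_t in enumerate(input_current):
              v += dt / tau * (v_rest - v) + i_t  # leak plus injected current
              if v >= v_thresh:
                  spikes.append(t)
                  v = v_rest
              trace.append(v)
          return np.array(trace), spikes

      current = np.full(200, 0.12)  # constant drive
      _, spike_times = lif(current)
      print(len(spike_times), "spikes, first at step", spike_times[0])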

  5. Computer-Aided Recognition of Facial Attributes for Fetal Alcohol Spectrum Disorders.

    PubMed

    Valentine, Matthew; Bihm, Dustin C J; Wolf, Lior; Hoyme, H Eugene; May, Philip A; Buckley, David; Kalberg, Wendy; Abdul-Rahman, Omar A

    2017-12-01

    Our aim was to compare the detection of facial attributes by computer-based facial recognition software applied to 2-D images against standard, manual examination in fetal alcohol spectrum disorders (FASD). Participants were gathered from the Fetal Alcohol Syndrome Epidemiology Research database. Standard frontal and oblique photographs of children were obtained during a manual, in-person dysmorphology assessment. Images were submitted for facial analysis conducted by the facial dysmorphology novel analysis technology (an automated system), which assesses ratios of measurements between various facial landmarks to determine the presence of dysmorphic features. Manual blinded dysmorphology assessments were compared with those obtained via the computer-aided system. Areas under the curve values for individual receiver-operating characteristic curves revealed the computer-aided system (0.88 ± 0.02) to be comparable to the manual method (0.86 ± 0.03) in detecting patients with FASD. Interestingly, cases of alcohol-related neurodevelopmental disorder (ARND) were identified more efficiently by the computer-aided system (0.84 ± 0.07) in comparison to the manual method (0.74 ± 0.04). A facial gestalt analysis of patients with ARND also identified more generalized facial findings compared to the cardinal facial features seen in more severe forms of FASD. We found increased diagnostic accuracy for ARND with our computer-aided method. As this category has been historically difficult to diagnose, we believe our experiment demonstrates that facial dysmorphology novel analysis technology can potentially improve ARND diagnosis by introducing a standardized metric for recognizing FASD-associated facial anomalies. Earlier recognition of these patients will lead to earlier intervention with improved patient outcomes. Copyright © 2017 by the American Academy of Pediatrics.

  6. Robust representation and recognition of facial emotions using extreme sparse learning.

    PubMed

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

    Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
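    Of the two ingredients combined here, the extreme learning machine is the simpler to sketch: a fixed random hidden layer whose output weights are fit by least squares. The Python example below shows only that half, on placeholder descriptors, and omits the paper's joint sparse-dictionary formulation.

      import numpy as np

      rng = np.random.default_rng(5)
      X = rng.normal(size=(300, 50))    # placeholder descriptor vectors
      y = rng.integers(0, 6, size=300)  # six emotion classes
      T = np.eye(6)[y]                  # one-hot targets

      W = rng.normal(size=(50, 200))    # random input weights, never trained
      b = rng.normal(size=200)
      H = np.tanh(X @ W + b)            # hidden-layer activations
      beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # least-squares readout

      pred = np.argmax(H @ beta, axis=1)
      print("training accuracy:", (pred == y).mean())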

  7. CACNA1C risk variant affects facial emotion recognition in healthy individuals.

    PubMed

    Nieratschker, Vanessa; Brückmann, Christof; Plewnia, Christian

    2015-11-27

    Recognition and correct interpretation of facial emotion is essential for social interaction and communication. Previous studies have shown that impairments in this cognitive domain are common features of several psychiatric disorders. Recent association studies identified CACNA1C as one of the most promising genetic risk factors for psychiatric disorders, and previous evidence suggests that the most replicated risk variant in CACNA1C (rs1006737) affects emotion recognition and processing. However, studies investigating the influence of rs1006737 on this intermediate phenotype in healthy subjects at the behavioral level are largely missing to date. Here, we applied the "Reading the Mind in the Eyes" test, a facial emotion recognition paradigm, in a cohort of 92 healthy individuals to address this question. Whereas accuracy was not affected by genotype, CACNA1C rs1006737 risk-allele carriers (AA/AG) showed significantly slower mean response times compared to individuals homozygous for the G-allele, indicating that healthy risk-allele carriers require more information to correctly identify a facial emotion. Our study is the first to provide evidence for an impairing behavioral effect of the CACNA1C risk variant rs1006737 on facial emotion recognition in healthy individuals and adds to the growing number of studies pointing towards CACNA1C as affecting intermediate phenotypes of psychiatric disorders.

  8. Perceived Parenting Mediates Serotonin Transporter Gene (5-HTTLPR) and Neural System Function during Facial Recognition: A Pilot Study.

    PubMed

    Nishikawa, Saori; Toshima, Tamotsu; Kobayashi, Masao

    2015-01-01

    This study examined changes in prefrontal oxy-Hb levels measured by NIRS (Near-Infrared Spectroscopy) during a facial-emotion recognition task in healthy adults, testing a mediational/moderational model of these variables. Fifty-three healthy adults (male = 35, female = 18) aged 22 to 37 years (mean age = 24.05 years) provided saliva samples, completed an EMBU questionnaire (Swedish acronym for Egna Minnen Beträffande Uppfostran [My memories of upbringing]), and participated in a facial-emotion recognition task during NIRS recording. There was a main effect of maternal rejection on RoxH (right frontal activation during an ambiguous task), and a gene × environment (G × E) interaction on RoxH, suggesting that individuals who carry the SL or LL genotype and who endorse greater perceived maternal rejection show less right frontal activation than SL/LL carriers with lower perceived maternal rejection. Finally, perceived parenting style played a mediating role in right frontal activation via the 5-HTTLPR genotype. Early-perceived parenting might influence neural activity in an uncertain situation, i.e., rating ambiguous faces, among individuals with certain genotypes. This preliminary study makes a small contribution to the mapping of an influence of gene and behaviour on the neural system. More such attempts should be made in order to clarify the links.

  9. Perceived Parenting Mediates Serotonin Transporter Gene (5-HTTLPR) and Neural System Function during Facial Recognition: A Pilot Study

    PubMed Central

    Nishikawa, Saori

    2015-01-01

    This study examined changes in prefrontal oxy-Hb levels measured by NIRS (Near-Infrared Spectroscopy) during a facial-emotion recognition task in healthy adults, testing a mediational/moderational model of these variables. Fifty-three healthy adults (male = 35, female = 18) aged 22 to 37 years (mean age = 24.05 years) provided saliva samples, completed an EMBU questionnaire (Swedish acronym for Egna Minnen Beträffande Uppfostran [My memories of upbringing]), and participated in a facial-emotion recognition task during NIRS recording. There was a main effect of maternal rejection on RoxH (right frontal activation during an ambiguous task), and a gene × environment (G×E) interaction on RoxH, suggesting that individuals who carry the SL or LL genotype and who endorse greater perceived maternal rejection show less right frontal activation than SL/LL carriers with lower perceived maternal rejection. Finally, perceived parenting style played a mediating role in right frontal activation via the 5-HTTLPR genotype. Early-perceived parenting might influence neural activity in an uncertain situation, i.e., rating ambiguous faces, among individuals with certain genotypes. This preliminary study makes a small contribution to the mapping of an influence of gene and behaviour on the neural system. More such attempts should be made in order to clarify the links. PMID:26418317

  10. Facial emotion recognition deficits in relatives of children with autism are not associated with 5HTTLPR.

    PubMed

    Neves, Maila de Castro Lourenço das; Tremeau, Fabien; Nicolato, Rodrigo; Lauar, Hélio; Romano-Silva, Marco Aurélio; Correa, Humberto

    2011-09-01

    A large body of evidence suggests that several aspects of face processing are impaired in autism and that this impairment might be hereditary. This study was aimed at assessing facial emotion recognition in parents of children with autism and its associations with a functional polymorphism of the serotonin transporter (5HTTLPR). We evaluated 40 parents of children with autism and 41 healthy controls. All participants were administered the Penn Emotion Recognition Test (ER40) and were genotyped for 5HTTLPR. Our study showed that parents of children with autism performed worse in the facial emotion recognition test than controls. Analyses of error patterns showed that parents of children with autism over-attributed neutral expressions to emotional faces. We found evidence that the 5HTTLPR polymorphism did not influence performance on the Penn Emotion Recognition Test, but that it may determine different error patterns. Facial emotion recognition deficits are more common in first-degree relatives of autistic patients than in the general population, suggesting that facial emotion recognition is a candidate endophenotype for autism.

  11. Modulation of α power and functional connectivity during facial affect recognition.

    PubMed

    Popov, Tzvetan; Miller, Gregory A; Rockstroh, Brigitte; Weisz, Nathan

    2013-04-03

    Research has linked oscillatory activity in the α frequency range, particularly in sensorimotor cortex, to processing of social actions. Results further suggest involvement of sensorimotor α in the processing of facial expressions, including affect. The sensorimotor face area may be critical for perception of emotional face expression, but the role it plays is unclear. The present study sought to clarify how oscillatory brain activity contributes to or reflects processing of facial affect during changes in facial expression. Neuromagnetic oscillatory brain activity was monitored while 30 volunteers viewed videos of human faces that changed their expression from neutral to fearful, neutral, or happy expressions. Induced changes in α power during the different morphs, source analysis, and graph-theoretic metrics served to identify the role of α power modulation and cross-regional coupling by means of phase synchrony during facial affect recognition. Changes from neutral to emotional faces were associated with a 10-15 Hz power increase localized in bilateral sensorimotor areas, together with occipital power decrease, preceding reported emotional expression recognition. Graph-theoretic analysis revealed that, in the course of a trial, the balance between sensorimotor power increase and decrease was associated with decreased and increased transregional connectedness as measured by node degree. Results suggest that modulations in α power facilitate early registration, with sensorimotor cortex (including the sensorimotor face area) largely functionally decoupled and thereby protected from additional, disruptive input, and that a subsequent α power decrease, together with increased connectedness of sensorimotor areas, facilitates successful facial affect recognition.
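
    The graph-theoretic measure reported here, node degree, is straightforward to compute once a phase-synchrony matrix is available. A minimal sketch with numpy, using random placeholder data in place of real phase-locking values:

```python
import numpy as np

# Placeholder phase-synchrony matrix for 40 cortical sources:
# symmetric, values in [0, 1], higher = stronger phase coupling.
rng = np.random.default_rng(0)
plv = rng.random((40, 40))
plv = (plv + plv.T) / 2        # enforce symmetry
np.fill_diagonal(plv, 0.0)     # no self-connections

# Binarize at a threshold; node degree = connections per node.
adjacency = plv > 0.7
node_degree = adjacency.sum(axis=1)
print("mean node degree:", node_degree.mean())
```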

  12. The Boundaries of Hemispheric Processing in Visual Pattern Recognition

    DTIC Science & Technology

    1989-11-01

    Allen, M. W. (1968). Impairment in facial recognition in patients with cerebral disease. Cortex, 4, 344-358. Bogen, J. E. (1969). The other side of the brain...effects on a facial recognition task in normal subjects. Cortex, 9, 246-258. Hiscock, M. (1988). Behavioral asymmetries in normal children. In D. L... facial recognition. Neuropsychologia, 22, 471-477. Ross-Kossak, P., & Turkewitz, G. (1986). A micro and macro developmental view of the nature of changes

  13. Deficits in Facial Emotion Recognition in Schizophrenia: A Replication Study with Korean Subjects

    PubMed Central

    Lee, Seung Jae; Lee, Hae-Kook; Kweon, Yong-Sil; Lee, Chung Tai

    2010-01-01

    Objective We investigated the deficit in the recognition of facial emotions in a sample of medicated, stable Korean patients with schizophrenia using Korean facial emotion pictures and examined whether the possible impairments would corroborate previous findings. Methods Fifty-five patients with schizophrenia and 62 healthy control subjects completed the Facial Affect Identification Test with a new set of 44 colored photographs of Korean faces including the six universal emotions as well as neutral faces. Results Korean patients with schizophrenia showed impairments in the recognition of sad, fearful, and angry faces [F(1,114)=6.26, p=0.014; F(1,114)=6.18, p=0.014; F(1,114)=9.28, p=0.003, respectively], but their accuracy was no different from that of controls in the recognition of happy emotions. Higher total and three subscale scores of the Positive and Negative Syndrome Scale (PANSS) correlated with worse performance on both angry and neutral faces. Correct responses on happy stimuli were negatively correlated with negative symptom scores of the PANSS. Patients with schizophrenia also exhibited different patterns of misidentification relative to normal controls. Conclusion These findings were consistent with previous studies carried out with different ethnic groups, suggesting cross-cultural similarities in facial recognition impairment in schizophrenia. PMID:21253414

  14. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind.

    PubMed

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J; Sadato, Norihiro

    2013-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  15. Constructive autoassociative neural network for facial recognition.

    PubMed

    Fernandes, Bruno J T; Cavalcanti, George D C; Ren, Tsang I

    2014-01-01

    Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature.
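
    The abstract does not give CANet's equations, but the underlying autoassociative idea (train a network to reconstruct its input and classify by reconstruction error) can be sketched. Everything below is a generic, simplified stand-in, not the published CANet architecture with its dynamic receptive fields and pruning:

```python
import numpy as np

class Autoassociator:
    """Single-hidden-layer autoassociative network trained to
    reconstruct its input. One such model is trained per class;
    a test sample is assigned to the class whose model
    reconstructs it with the lowest error."""

    def __init__(self, n_in, n_hidden, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.lr = lr

    def _forward(self, X):
        H = np.tanh(X @ self.W1)
        return H, H @ self.W2

    def fit(self, X, epochs=200):
        for _ in range(epochs):
            H, Xhat = self._forward(X)
            err = Xhat - X                      # reconstruction error
            dW2 = H.T @ err                     # gradient wrt W2
            dW1 = X.T @ ((err @ self.W2.T) * (1 - H ** 2))
            self.W2 -= self.lr * dW2 / len(X)
            self.W1 -= self.lr * dW1 / len(X)
        return self

    def error(self, X):
        _, Xhat = self._forward(X)
        return ((Xhat - X) ** 2).mean(axis=1)   # per-sample error
```

    Classification then picks, for each test face, the class whose autoassociator yields the smallest reconstruction error.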

  16. Facial recognition software success rates for the identification of 3D surface reconstructed facial images: implications for patient privacy and security.

    PubMed

    Mazura, Jan C; Juluru, Krishna; Chen, Joseph J; Morgan, Tara A; John, Majnu; Siegel, Eliot L

    2012-06-01

    Image de-identification has focused on the removal of textual protected health information (PHI). Surface reconstructions of the face have the potential to reveal a subject's identity even when textual PHI is absent. This study assessed the ability of a computer application to match research subjects' 3D facial reconstructions with conventional photographs of their face. In a prospective study, 29 subjects underwent CT scans of the head and had frontal digital photographs of their face taken. Facial reconstructions of each CT dataset were generated on a 3D workstation. In phase 1, photographs of the 29 subjects undergoing CT scans were added to a digital directory and tested for recognition using facial recognition software. In phases 2-4, additional photographs were added in groups of 50 to increase the pool of possible matches and the test for recognition was repeated. As an internal control, photographs of all subjects were tested for recognition against an identical photograph. Of 3D reconstructions, 27.5% were matched correctly to corresponding photographs (95% upper CL, 40.1%). All study subject photographs were matched correctly to identical photographs (95% lower CL, 88.6%). Of 3D reconstructions, 96.6% were recognized simply as a face by the software (95% lower CL, 83.5%). Facial recognition software has the potential to recognize features on 3D CT surface reconstructions and match these with photographs, with implications for PHI.
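
    The one-sided 95% confidence limits quoted above can be reproduced in principle with an exact binomial (Clopper-Pearson) bound. The abstract states neither the interval method nor the raw counts, so the counts below are back-calculated assumptions (e.g., 27.5% of 29 is roughly 8 matches) used purely for illustration:

```python
from scipy.stats import beta

def upper_cl(k, n, level=0.95):
    """One-sided upper Clopper-Pearson limit for k successes in n."""
    return 1.0 if k == n else beta.ppf(level, k + 1, n - k)

def lower_cl(k, n, level=0.95):
    """One-sided lower Clopper-Pearson limit for k successes in n."""
    return 0.0 if k == 0 else beta.ppf(1 - level, k, n - k + 1)

print(upper_cl(8, 29))   # upper bound on the 3D-to-photo match rate
print(lower_cl(29, 29))  # lower bound on the identical-photo match rate
print(lower_cl(28, 29))  # lower bound on the face-detection rate
```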

  17. Individual differences in the recognition of facial expressions: an event-related potentials study.

    PubMed

    Tamamiya, Yoshiyuki; Hiraki, Kazuo

    2013-01-01

    Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of 3 facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, where ERP components were set as predictor variables, assessed hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies predicted accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time. The P2 latencies significantly predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.

  18. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    PubMed

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Monitoring of facial stress during space flight: Optical computer recognition combining discriminative and generative methods

    NASA Astrophysics Data System (ADS)

    Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.

    2007-02-01

    Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence is a subject that has not received much attention although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that yield results superior to using either approach individually. The current optical algorithm methods performed at a 68% accuracy rate in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. Accuracy and practical feasibility of the technique are being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.
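
    The generative-plus-discriminative combination described here can be sketched with off-the-shelf libraries: fit one hidden Markov model per stress condition on the tracked facial parameters, then hand the per-class log-likelihoods to an SVM. A schematic sketch assuming hmmlearn and scikit-learn, with random placeholder sequences standing in for the real tracker output:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholders: 40 sequences (20 frames x 6 tracked face parameters),
# first 20 labelled low stress (0), last 20 high stress (1).
seqs = [rng.normal(size=(20, 6)) for _ in range(40)]
labels = np.array([0] * 20 + [1] * 20)

# Generative stage: one HMM per stress condition.
hmms = {}
for c in (0, 1):
    train = np.vstack([s for s, y in zip(seqs, labels) if y == c])
    hmms[c] = GaussianHMM(n_components=3, random_state=0).fit(
        train, lengths=[20] * int((labels == c).sum()))

# Discriminative stage: SVM over per-class log-likelihood features.
loglik = np.array([[hmms[c].score(s) for c in (0, 1)] for s in seqs])
clf = SVC(kernel="rbf").fit(loglik, labels)
print("training accuracy:", clf.score(loglik, labels))
```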

  20. Violent media consumption and the recognition of dynamic facial expressions.

    PubMed

    Kirsh, Steven J; Mounts, Jeffrey R W; Olczak, Paul V

    2006-05-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph. Results indicated that, independent of trait aggressiveness, participants high in violent media consumption responded slower to depictions of happiness and faster to depictions of anger than participants low in violent media consumption. Implications of these findings are discussed with respect to current models of aggressive behavior.

  1. Role of temporal processing stages by inferior temporal neurons in facial recognition.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.

  2. Role of Temporal Processing Stages by Inferior Temporal Neurons in Facial Recognition

    PubMed Central

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.

  3. Investigation of facial emotion recognition, alexithymia, and levels of anxiety and depression in patients with somatic symptoms and related disorders

    PubMed Central

    Öztürk, Ahmet; Kiliç, Alperen; Deveci, Erdem; Kirpinar, İsmet

    2016-01-01

    Background The concept of facial emotion recognition is well established in various neuropsychiatric disorders. Although emotional disturbances are strongly associated with somatoform disorders, there are a restricted number of studies that have investigated facial emotion recognition in somatoform disorders. Furthermore, there have been no studies that have examined this issue using the new diagnostic criteria that classify somatoform disorders as somatic symptoms and related disorders (SSD). In this study, we aimed to compare the factors of facial emotion recognition between patients with SSD and age- and sex-matched healthy controls (HC) and to retest and investigate the factors of facial emotion recognition using the new criteria for SSD. Patients and methods After applying the inclusion and exclusion criteria, 54 patients who were diagnosed with SSD according to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria and 46 age- and sex-matched HC were selected to participate in the present study. Facial emotion recognition, alexithymia, and the status of anxiety and depression were compared between the groups. Results Patients with SSD had significantly lower facial emotion recognition scores for fearful, disgusted, and neutral faces compared with age- and sex-matched HC (t=−2.88, P=0.005; t=−2.86, P=0.005; and t=−2.56, P=0.009, respectively). After eliminating the effects of alexithymia and depressive and anxious states, the groups were found to be similar in terms of their responses to facial emotion and mean reaction time to facial emotions. Discussion Although there have been a limited number of studies that have examined the recognition of facial emotion in patients with somatoform disorders, our study is the first to investigate facial recognition in patients with SSD diagnosed according to the DSM-5 criteria. Recognition of facial emotion was found to be disturbed in patients with SSD. However, our findings suggest that

  4. Developing a Natural User Interface and Facial Recognition System With OpenCV and the Microsoft Kinect

    NASA Technical Reports Server (NTRS)

    Gutensohn, Michael

    2018-01-01

    The task for this project was to design, develop, test, and deploy a facial recognition system for the Kennedy Space Center Augmented/Virtual Reality Lab. This system will serve as a means of user authentication as part of the NUI of the lab. The overarching goal is to create a seamless user interface that will allow the user to initiate and interact with AR and VR experiences without ever needing to use a mouse or keyboard at any step in the process.

  5. The telltale face: possible mechanisms behind defector and cooperator recognition revealed by emotional facial expression metrics.

    PubMed

    Kovács-Bálint, Zsófia; Bereczkei, Tamás; Hernádi, István

    2013-11-01

    In this study, we investigated the role of facial cues in cooperator and defector recognition. First, a face image database was constructed from pairs of full face portraits of target subjects taken at the moment of decision-making in a prisoner's dilemma game (PDG) and in a preceding neutral task. Image pairs with no deficiencies (n = 67) were standardized for orientation and luminance. Then, confidence in defector and cooperator recognition was tested with image rating in a different group of lay judges (n = 62). Results indicate that (1) defectors were better recognized (58% vs. 47%), (2) they looked different from cooperators (p < .01), (3) males but not females evaluated the images with a relative bias towards the cooperator category (p < .01), and (4) females were more confident in detecting defectors (p < .05). According to facial microexpression analysis, defection was strongly linked with depressed lower lips and less opened eyes. Significant correlation was found between the intensity of micromimics and the rating of images in the cooperator-defector dimension. In summary, facial expressions can be considered as reliable indicators of momentary social dispositions in the PDG. Females may exhibit an evolutionary-based overestimation bias to detecting social visual cues of the defector face. © 2012 The British Psychological Society.

  6. Age Deficits in Facial Affect Recognition: The Influence of Dynamic Cues.

    PubMed

    Grainger, Sarah A; Henry, Julie D; Phillips, Louise H; Vanman, Eric J; Allen, Roy

    2017-07-01

    Older adults have difficulties in identifying most facial expressions of emotion. However, most aging studies have presented static photographs of intense expressions, whereas in everyday experience people see emotions that develop and change. The present study was designed to assess whether age-related difficulties with emotion recognition are reduced when more ecologically valid (i.e., dynamic) stimuli are used. We examined the effect of stimuli format (i.e., static vs. dynamic) on facial affect recognition in two separate studies that included independent samples and distinct stimuli sets. In addition to younger and older participants, a middle-aged group was included in Study 1 and eye gaze patterns were assessed in Study 2. Across both studies, older adults performed worse than younger adults on measures of facial affect recognition. In Study 1, older and middle-aged adults benefited from dynamic stimuli, but only when the emotional displays were subtle. Younger adults gazed more at the eye region of the face relative to older adults (Study 2), but dynamic presentation increased attention towards the eye region for younger adults only. Together, these studies provide important and novel insights into the specific circumstances in which older adults may be expected to experience difficulties in perceiving facial emotions. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. A Spiking Neural Network Based Cortex-Like Mechanism and Application to Facial Expression Recognition

    PubMed Central

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as a feedforward, hierarchical simulation of the ventral stream of the visual cortex using a biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of the visual cortex and from developments in artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy with the computing power of the spiking neuron model, a practical framework is presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy is capable of dealing with complicated pattern recognition problems. This suggests that, by combining cognitive models with modern neurocomputational approaches, the neurosystematic study of cortex-like mechanisms has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanisms. PMID:23193391
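
    The neuron model in such systems is typically some variant of the leaky integrate-and-fire (LIF) unit; the abstract does not specify the paper's exact model, so the following is a generic textbook LIF sketch:

```python
import numpy as np

def lif_spike_train(input_current, dt=1.0, tau=20.0,
                    v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Euler simulation of a leaky integrate-and-fire neuron:
    dv/dt = (-(v - v_rest) + I(t)) / tau, with spike-and-reset at
    threshold. Returns the membrane trace and spike times (ms)."""
    v, trace, spikes = v_rest, [], []
    for t, i_t in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_thresh:
            spikes.append(t * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# A constant suprathreshold drive yields a regular spike train.
_, spike_times = lif_spike_train(np.full(500, 1.5))
print("firing rate (Hz):", len(spike_times) / 0.5)  # 500 ms window
```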

  8. The beneficial effect of oxytocin on avoidance-related facial emotion recognition depends on early life stress experience.

    PubMed

    Feeser, Melanie; Fan, Yan; Weigand, Anne; Hahn, Adam; Gärtner, Matti; Aust, Sabine; Böker, Heinz; Bajbouj, Malek; Grimm, Simone

    2014-12-01

    Previous studies have shown that oxytocin (OXT) enhances social cognitive processes. It has also been demonstrated that OXT does not uniformly facilitate social cognition. The effects of OXT administration strongly depend on the exposure to stressful experiences in early life. Emotional facial recognition is crucial for social cognition. However, no study has yet examined how the effects of OXT on the ability to identify emotional faces are altered by early life stress (ELS) experiences. Given the role of OXT in modulating social motivational processes, we specifically aimed to investigate its effects on the recognition of approach- and avoidance-related facial emotions. In a double-blind, between-subjects, placebo-controlled design, 82 male participants performed an emotion recognition task with faces taken from the "Karolinska Directed Emotional Faces" set. We clustered the six basic emotions along the dimensions approach (happy, surprise, anger) and avoidance (fear, sadness, disgust). ELS was assessed with the Childhood Trauma Questionnaire (CTQ). Our results showed that OXT improved the ability to recognize avoidance-related emotional faces as compared to approach-related emotional faces. Whereas the performance for avoidance-related emotions in participants with higher ELS scores was comparable in both OXT and placebo condition, OXT enhanced emotion recognition in participants with lower ELS scores. Independent of OXT administration, we observed increased emotion recognition for avoidance-related faces in participants with high ELS scores. Our findings suggest that the investigation of OXT on social recognition requires a broad approach that takes ELS experiences as well as motivational processes into account.

  9. In-the-wild facial expression recognition in extreme poses

    NASA Astrophysics Data System (ADS)

    Yang, Fei; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Facial expression recognition is an active problem in computer vision research. In recent years, the work has moved from lab environments to in-the-wild circumstances, which is challenging, especially under extreme poses. Current expression detection systems typically try to factor out pose effects in pursuit of general applicability. In this work, we take the opposite approach: we consider head poses explicitly and detect expressions within specific head pose classes. Our work includes two parts: detecting the head pose and grouping it into one pre-defined head pose class, and then recognizing the facial expression within each pose class. Our experiments show that recognition results with pose-class grouping are much better than those of direct recognition without considering poses. We combine hand-crafted features (SIFT, LBP, and geometric features) with deep learning features as the representation of the expressions; the hand-crafted features are added into the deep learning framework along with the high-level deep learning features. As a comparison, we implement SVM and random forest classifiers as the prediction models. To train and test our methodology, we labeled a face dataset with the 6 basic expressions.
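
    The two-stage scheme (assign a pose class first, then run an expression model trained for that pose class) maps onto a pair of classifiers. A schematic sketch with scikit-learn, using random vectors as stand-ins for the SIFT/LBP/geometric/deep descriptors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Placeholders: 300 face descriptors, 3 pose classes, 6 expressions.
X = rng.normal(size=(300, 128))
pose = rng.integers(0, 3, 300)
expr = rng.integers(0, 6, 300)

# Stage 1: predict the head pose class.
pose_clf = RandomForestClassifier(random_state=0).fit(X, pose)

# Stage 2: a separate expression classifier per pose class.
expr_clfs = {}
for p in range(3):
    mask = pose == p
    expr_clfs[p] = RandomForestClassifier(random_state=0).fit(X[mask], expr[mask])

def predict_expression(x):
    p = pose_clf.predict(x.reshape(1, -1))[0]
    return expr_clfs[p].predict(x.reshape(1, -1))[0]

print(predict_expression(X[0]))
```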

  10. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    PubMed

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind

    PubMed Central

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T.; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J.; Sadato, Norihiro

    2012-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience. PMID:23372547

  12. Lying about facial recognition: an fMRI study.

    PubMed

    Bhatt, S; Mbwana, J; Adeyemo, A; Sawyer, A; Hailu, A; Vanmeter, J

    2009-03-01

    Novel deception detection techniques have been in creation for centuries. Functional magnetic resonance imaging (fMRI) is a neuroscience technology that non-invasively measures brain activity associated with behavior and cognition. A number of investigators have explored the utilization and efficiency of fMRI in deception detection. In this study, 18 subjects were instructed during an fMRI "line-up" task to either conceal (lie) or reveal (truth) the identities of individuals seen in study sets in order to determine the neural correlates of intentionally misidentifying previously known faces (lying about recognition). A repeated measures ANOVA (lie vs. truth and familiar vs. unfamiliar) and two paired t-tests (familiar vs. unfamiliar and familiar lie vs. familiar truth) revealed areas of activation associated with deception in the right MFG, red nucleus, IFG, SMG, SFG (with ACC), DLPFC, and bilateral precuneus. The areas activated in the present study may be involved in the suppression of truth, working and visuospatial memories, and imagery when providing misleading (deceptive) responses to facial identification prompts in the form of a "line-up".

  13. Impairment of facial recognition in patients with right cerebral infarcts quantified by computer aided "morphing".

    PubMed Central

    Rösler, A; Lanquillon, S; Dippel, O; Braune, H J

    1997-01-01

    OBJECTIVE: To investigate where facial recognition is located anatomically and to establish whether there is a graded transition from unimpaired recognition of faces to complete prosopagnosia after infarctions in the territory of the middle cerebral artery. METHODS: A computerised morphing program was developed which shows 30 frames gradually changing from portrait photographs of unfamiliar persons to those of well known persons. With a standardised protocol, 31 patients with right and left sided infarctions in the territory of the middle cerebral artery and an age and sex matched control group were compared by non-parametric tests. RESULTS AND CONCLUSION: Facial recognition in patients with right sided lesions was significantly impaired compared with controls and with patients with left sided lesions. A graded impairment in facial recognition in patients with right sided ischaemic infarcts in the territory of the middle cerebral artery seems to exist. PMID:9069481

  14. Facial emotion recognition and sleep in mentally disordered patients: A natural experiment in a high security hospital.

    PubMed

    Chu, Simon; McNeill, Kimberley; Ireland, Jane L; Qurashi, Inti

    2015-12-15

    We investigated the relationship between a change in sleep quality and facial emotion recognition accuracy in a group of mentally-disordered inpatients at a secure forensic psychiatric unit. Patients whose sleep improved over time also showed improved facial emotion recognition while patients who showed no sleep improvement showed no change in emotion recognition. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  15. Facial emotion recognition deficits following moderate-severe Traumatic Brain Injury (TBI): re-examining the valence effect and the role of emotion intensity.

    PubMed

    Rosenberg, Hannah; McDonald, Skye; Dethier, Marie; Kessels, Roy P C; Westbrook, R Frederick

    2014-11-01

    Many individuals who sustain moderate-severe traumatic brain injuries (TBI) are poor at recognizing emotional expressions, with a greater impairment in recognizing negative (e.g., fear, disgust, sadness, and anger) than positive emotions (e.g., happiness and surprise). It has been questioned whether this "valence effect" might be an artifact of the wide use of static facial emotion stimuli (usually full-blown expressions) which differ in difficulty rather than a real consequence of brain impairment. This study aimed to investigate the valence effect in TBI, while examining emotion recognition across different intensities (low, medium, and high). Twenty-seven individuals with TBI and 28 matched control participants were tested on the Emotion Recognition Task (ERT). The TBI group was more impaired in overall emotion recognition, and less accurate in recognizing negative emotions. However, examining the performance across the different intensities indicated that this difference was driven by some emotions (e.g., happiness) being much easier to recognize than others (e.g., fear and surprise). Our findings indicate that individuals with TBI have an overall deficit in facial emotion recognition, and that both people with TBI and control participants found some emotions more difficult than others. These results suggest that conventional measures of facial affect recognition that do not examine variance in the difficulty of emotions may produce erroneous conclusions about differential impairment. They also cast doubt on the notion that dissociable neural pathways underlie the recognition of positive and negative emotions, which are differentially affected by TBI and potentially other neurological or psychiatric disorders.

  16. The effect of comorbid depression on facial and prosody emotion recognition in first-episode schizophrenia spectrum.

    PubMed

    Herniman, Sarah E; Allott, Kelly A; Killackey, Eóin; Hester, Robert; Cotton, Sue M

    2017-01-15

    Comorbid depression is common in first-episode schizophrenia spectrum (FES) disorders. Both depression and FES are associated with significant deficits in facial and prosody emotion recognition performance. However, it remains unclear whether people with FES and comorbid depression, compared to those without comorbid depression, have overall poorer emotion recognition, or instead, a different pattern of emotion recognition deficits. The aim of this study was to compare facial and prosody emotion recognition performance between those with and without comorbid depression in FES. This study involved secondary analysis of baseline data from a randomized controlled trial of vocational intervention for young people with first-episode psychosis (N=82; age range: 15-25 years). Those with comorbid depression (n=24) had more accurate recognition of sadness in faces compared to those without comorbid depression. Severity of depressive symptoms was also associated with more accurate recognition of sadness in faces. Such results did not recur for prosody emotion recognition. In addition to the cross-sectional design, limitations of this study include the absence of facial and prosodic recognition of neutral emotions. Findings indicate a mood congruent negative bias in facial emotion recognition in those with comorbid depression and FES, and provide support for cognitive theories of depression that emphasise the role of such biases in the development and maintenance of depression. Longitudinal research is needed to determine whether mood-congruent negative biases are implicated in the development and maintenance of depression in FES, or whether such biases are simply markers of depressed state. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality

    PubMed Central

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque

    2018-01-01

    Extensive possibilities of applications have made emotion recognition ineluctable and challenging in the field of computer science. The use of non-verbal cues such as gestures, body movement, and facial expressions conveys feeling and feedback to the user. This discipline of Human–Computer Interaction places reliance on the algorithmic robustness and the sensitivity of the sensor to ameliorate the recognition. Sensors play a significant role in accurate detection by providing a very high-quality input, hence increasing the efficiency and the reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and the techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting the emotions by facial expressions. Later, mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam. PMID:29389845

  18. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality.

    PubMed

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque; Javaid, Ahmad Y

    2018-02-01

    Extensive possibilities of applications have made emotion recognition ineluctable and challenging in the field of computer science. The use of non-verbal cues such as gestures, body movement, and facial expressions conveys feeling and feedback to the user. This discipline of Human-Computer Interaction places reliance on the algorithmic robustness and the sensitivity of the sensor to ameliorate the recognition. Sensors play a significant role in accurate detection by providing a very high-quality input, hence increasing the efficiency and the reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and the techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting the emotions by facial expressions. Later, mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam.

  19. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    ERIC Educational Resources Information Center

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  20. Differences in Facial Emotion Recognition between First Episode Psychosis, Borderline Personality Disorder and Healthy Controls

    PubMed Central

    Gonzalez de Artaza, Maider; Bustamante, Sonia; Orgaz, Pablo; Osa, Luis; Angosto, Virxinia; Valverde, Cristina; Bilbao, Amaia; Madrazo, Arantza; van Os, Jim; Gonzalez-Torres, Miguel Angel

    2016-01-01

    Background Facial emotion recognition (FER) is essential to guide social functioning and behaviour for interpersonal communication. FER may be altered in severe mental illness such as in psychosis and in borderline personality disorder patients. However, it is unclear if these FER alterations are specifically related to psychosis. Awareness of FER alterations may be useful in clinical settings to improve treatment strategies. The aim of our study was to examine FER in patients with severe mental disorder and their relation with psychotic symptomatology. Materials and Methods Socio-demographic and clinical variables were collected. Alterations on emotion recognition were assessed in 3 groups: patients with first episode psychosis (FEP) (n = 64), borderline personality patients (BPD) (n = 37) and healthy controls (n = 137), using the Degraded Facial Affect Recognition Task. The Positive and Negative Syndrome Scale, Structured Interview for Schizotypy Revised and Community Assessment of Psychic Experiences scales were used to assess positive psychotic symptoms. WAIS III subtests were used to assess IQ. Results Kruskal-Wallis analysis showed a significant difference between groups on the FER of neutral faces score between FEP, BPD patients and controls and between FEP patients and controls in angry face recognition. No significant differences were found between groups in the fear or happy conditions. There was a significant difference between groups in the attribution of negative emotion to happy faces. BPD and FEP groups had a much higher tendency to recognize happy faces as negatives. There was no association with the different symptom domains in either group. Conclusions FEP and BPD patients have problems in recognizing neutral faces more frequently than controls. Moreover, patients tend to over-report negative emotions in recognition of happy faces. Although no relation between psychotic symptoms and FER alterations was found, these deficits could contribute to a patient's misinterpretations in daily life.

  1. Differences in Facial Emotion Recognition between First Episode Psychosis, Borderline Personality Disorder and Healthy Controls.

    PubMed

    Catalan, Ana; Gonzalez de Artaza, Maider; Bustamante, Sonia; Orgaz, Pablo; Osa, Luis; Angosto, Virxinia; Valverde, Cristina; Bilbao, Amaia; Madrazo, Arantza; van Os, Jim; Gonzalez-Torres, Miguel Angel

    2016-01-01

    Facial emotion recognition (FER) is essential to guide social functioning and behaviour for interpersonal communication. FER may be altered in severe mental illness such as in psychosis and in borderline personality disorder patients. However, it is unclear if these FER alterations are specifically related to psychosis. Awareness of FER alterations may be useful in clinical settings to improve treatment strategies. The aim of our study was to examine FER in patients with severe mental disorder and their relation with psychotic symptomatology. Socio-demographic and clinical variables were collected. Alterations on emotion recognition were assessed in 3 groups: patients with first episode psychosis (FEP) (n = 64), borderline personality patients (BPD) (n = 37) and healthy controls (n = 137), using the Degraded Facial Affect Recognition Task. The Positive and Negative Syndrome Scale, Structured Interview for Schizotypy Revised and Community Assessment of Psychic Experiences scales were used to assess positive psychotic symptoms. WAIS III subtests were used to assess IQ. Kruskal-Wallis analysis showed a significant difference between groups on the FER of neutral faces score between FEP, BPD patients and controls and between FEP patients and controls in angry face recognition. No significant differences were found between groups in the fear or happy conditions. There was a significant difference between groups in the attribution of negative emotion to happy faces. BPD and FEP groups had a much higher tendency to recognize happy faces as negatives. There was no association with the different symptom domains in either group. FEP and BPD patients have problems in recognizing neutral faces more frequently than controls. Moreover, patients tend to over-report negative emotions in recognition of happy faces. Although no relation between psychotic symptoms and FER alterations was found, these deficits could contribute to a patient's misinterpretations in daily life.

  2. Learning task affects ERP-correlates of the own-race bias, but not recognition memory performance.

    PubMed

    Stahl, Johanna; Wiese, Holger; Schweinberger, Stefan R

    2010-06-01

    People are generally better in recognizing faces from their own ethnic group as opposed to faces from another ethnic group, a finding which has been interpreted in the context of two opposing theories. Whereas perceptual expertise theories stress the role of long-term experience with one's own ethnic group, race feature theories assume that the processing of an other-race-defining feature triggers inferior coding and recognition of faces. The present study tested these hypotheses by manipulating the learning task in a recognition memory test. At learning, one group of participants categorized faces according to ethnicity, whereas another group rated facial attractiveness. Subsequent recognition tests indicated clear and similar own-race biases for both groups. However, ERPs from learning and test phases demonstrated an influence of learning task on neurophysiological processing of own- and other-race faces. While both groups exhibited larger N170 responses to Asian as compared to Caucasian faces, task-dependent differences were seen in a subsequent P2 ERP component. Whereas the P2 was more pronounced for Caucasian faces in the categorization group, this difference was absent in the attractiveness rating group. The learning task thus influences early face encoding. Moreover, comparison with recent research suggests that this attractiveness rating task influences the processes reflected in the P2 in a similar manner as perceptual expertise for other-race faces does. By contrast, the behavioural own-race bias suggests that long-term expertise is required to increase other-race face recognition and hence attenuate the own-race bias. Copyright 2010 Elsevier Ltd. All rights reserved.

  3. Facial emotion recognition in Parkinson's disease: A review and new hypotheses

    PubMed Central

    Vérin, Marc; Sauleau, Paul; Grandjean, Didier

    2018-01-01

    Parkinson's disease is a neurodegenerative disorder classically characterized by motor symptoms. Among them, hypomimia affects facial expressiveness and social communication and has a highly negative impact on patients' and relatives' quality of life. Patients also frequently experience nonmotor symptoms, including emotional‐processing impairments, leading to difficulty in recognizing emotions from faces. Aside from its theoretical importance, understanding the disruption of facial emotion recognition in PD is crucial for improving quality of life for both patients and caregivers, as this impairment is associated with heightened interpersonal difficulties. However, studies assessing abilities in recognizing facial emotions in PD still report contradictory outcomes. The origins of this inconsistency are unclear, and several questions (regarding the role of dopamine replacement therapy or the possible consequences of hypomimia) remain unanswered. We therefore undertook a fresh review of relevant articles focusing on facial emotion recognition in PD to deepen current understanding of this nonmotor feature, exploring multiple significant potential confounding factors, both clinical and methodological, and discussing probable pathophysiological mechanisms. This led us to examine recent proposals about the role of basal ganglia‐based circuits in emotion and to consider the involvement of facial mimicry in this deficit from the perspective of embodied simulation theory. We believe our findings will inform clinical practice and increase fundamental knowledge, particularly in relation to potential embodied emotion impairment in PD. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:29473661

  4. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. Entropy criterion is applied to select the effective Gabor feature which is a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered from QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.
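
    The core of RDA is a doubly regularized covariance estimate: a parameter λ blends each class covariance with the pooled covariance (λ = 0 gives QDA, λ = 1 gives LDA), and γ shrinks the result towards a scaled identity, which is what resolves the small-sample, ill-posed estimation problems mentioned above. A simplified numpy sketch of Friedman-style RDA covariances (the paper tunes λ and γ with PSO; here they are fixed by hand):

```python
import numpy as np

def rda_covariances(X, y, lam, gamma):
    """Simplified Friedman-style RDA covariance estimates.

    lam:   blends each class covariance with the pooled covariance
           (lam=0 -> QDA, lam=1 -> LDA).
    gamma: shrinks the blend towards a scaled identity matrix."""
    d = X.shape[1]
    pooled = np.cov(X, rowvar=False)
    covs = {}
    for c in np.unique(y):
        Sk = np.cov(X[y == c], rowvar=False)
        Sk_lam = (1 - lam) * Sk + lam * pooled
        covs[c] = ((1 - gamma) * Sk_lam
                   + gamma * (np.trace(Sk_lam) / d) * np.eye(d))
    return covs

# Usage on random placeholder features:
rng = np.random.default_rng(0)
X, y = rng.normal(size=(60, 5)), rng.integers(0, 3, 60)
print(rda_covariances(X, y, lam=0.5, gamma=0.1)[0].shape)  # (5, 5)
```

    Classification then plugs these covariances into the usual Gaussian discriminant scores for each class.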

  5. Are there differential deficits in facial emotion recognition between paranoid and non-paranoid schizophrenia? A signal detection analysis.

    PubMed

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2013-10-30

    This study assessed facial emotion recognition abilities in subjects with paranoid and non-paranoid schizophrenia (NPS) using signal detection theory. We explore the differential deficits in facial emotion recognition in 44 paranoid patients with schizophrenia (PS) and 30 non-paranoid patients with schizophrenia (NPS), compared to 80 healthy controls. We used morphed faces with different intensities of emotion and computed the sensitivity index (d') of each emotion. The results showed that performance differed between the schizophrenia and healthy controls groups in the recognition of both negative and positive affects. The PS group performed worse than the healthy controls group but better than the NPS group in overall performance. Performance differed between the NPS and healthy controls groups in the recognition of all basic emotions and neutral faces; between the PS and healthy controls groups in the recognition of angry faces; and between the PS and NPS groups in the recognition of happiness, anger, sadness, disgust, and neutral affects. The facial emotion recognition impairment in schizophrenia may reflect a generalized deficit rather than a negative-emotion specific deficit. The PS group performed worse than the control group, but better than the NPS group in facial expression recognition, with differential deficits between PS and NPS patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
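
    The sensitivity index d′ used here is the difference between the z-transformed hit and false-alarm rates. A short sketch with scipy, including a common log-linear correction that keeps the z-transform finite at rates of 0 or 1 (the paper's exact correction, if any, is not stated in the abstract):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate)."""
    hr = (hits + 0.5) / (hits + misses + 1)                        # corrected rate
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical counts: 40/10 hits/misses, 12/38 FAs/correct rejections.
print(round(d_prime(40, 10, 12, 38), 2))
```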

  6. Developmental prosopagnosia and the Benton Facial Recognition Test.

    PubMed

    Duchaine, Bradley C; Nakayama, Ken

    2004-04-13

    The Benton Facial Recognition Test is used for clinical and research purposes, but evidence suggests that it is possible to pass the test with impaired face discrimination abilities. The authors tested 11 patients with developmental prosopagnosia using this test, and a majority scored in the normal range. Consequently, scores in the normal range should be interpreted cautiously, and testing should always be supplemented by other face tests.

  7. Pervasive influence of idiosyncratic associative biases during facial emotion recognition.

    PubMed

    El Zein, Marwa; Wyart, Valentin; Grèzes, Julie

    2018-06-11

    Facial morphology has been shown to influence perceptual judgments of emotion in a way that is shared across human observers. Here we demonstrate that these shared associations between facial morphology and emotion coexist with strong variations unique to each human observer. Interestingly, a large part of these idiosyncratic associations does not vary on short time scales, emerging from stable inter-individual differences in the way facial morphological features influence emotion recognition. Computational modelling of decision-making and neural recordings of electrical brain activity revealed that both shared and idiosyncratic face-emotion associations operate through a common biasing mechanism rather than an increased sensitivity to face-associated emotions. Together, these findings emphasize the underestimated influence of idiosyncrasies on core social judgments and identify their neuro-computational signatures.
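
    The record's central contrast, a biasing mechanism versus an increased sensitivity, maps neatly onto the standard signal-detection split between criterion c and d'. The sketch below illustrates that split on made-up hit and false-alarm rates; it is not the authors' computational model of decision-making.

        from scipy.stats import norm

        def sdt_indices(hit_rate, fa_rate):
            """Return (d', c): sensitivity and response bias. A pure bias
            shifts the criterion c while leaving d' essentially unchanged."""
            z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
            return z_h - z_f, -0.5 * (z_h + z_f)

        print(sdt_indices(0.80, 0.30))  # baseline observer: d' ~ 1.37
        print(sdt_indices(0.88, 0.42))  # similar d', more liberal criterion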

  8. The Relation of Facial Affect Recognition and Empathy to Delinquency in Youth Offenders

    ERIC Educational Resources Information Center

    Carr, Mary B.; Lutjemeier, John A.

    2005-01-01

    Associations among facial affect recognition, empathy, and self-reported delinquency were studied in a sample of 29 male youth offenders at a probation placement facility. Youth offenders were asked to recognize facial expressions of emotions from adult faces, child faces, and cartoon faces. Youth offenders also responded to a series of statements…

  9. Facial recognition: a cognitive study of elderly dementia patients and normal older adults.

    PubMed

    Zandi, T; Cooper, M; Garrison, L

    1992-01-01

    Recognition of familiar, everyday emotional facial expressions was tested in dementia patients and normal older adults. In three conditions, subjects were required to name the emotions depicted in pictures and to produce them when presented with the verbal labels of the expressions. The dementia patients' best performance occurred when they had access to the verbal labels while viewing the pictures. The major deficiency in facial recognition was found to be dysnomia-related. The findings of this study suggest that the connection between the gnostic units of expression and the gnostic units of verbal labeling is not significantly impaired among dementia patients.

  10. Local intensity area descriptor for facial recognition in ideal and noise conditions

    NASA Astrophysics Data System (ADS)

    Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu

    2017-03-01

    We propose a local texture descriptor, local intensity area descriptor (LIAD), which is applied for human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improvement in accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
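
    The record specifies the surrounding pipeline (regional histograms concatenated into one feature vector, nearest-neighbor matching with chi-square or histogram intersection) but not the LIAD coding itself, so in the sketch below a plain intensity histogram stands in for the per-pixel descriptor. Everything else follows the described pipeline; images and labels are made up.

        import numpy as np

        def region_histograms(img, grid=(4, 4), bins=16):
            """Split the image into grid cells, histogram each cell, and
            concatenate the normalized histograms into one feature vector."""
            h, w = img.shape
            gh, gw = grid
            feats = []
            for i in range(gh):
                for j in range(gw):
                    cell = img[i * h // gh:(i + 1) * h // gh,
                               j * w // gw:(j + 1) * w // gw]
                    hist, _ = np.histogram(cell, bins=bins, range=(0, 256))
                    feats.append(hist / max(hist.sum(), 1))
            return np.concatenate(feats)

        def chi_square(p, q, eps=1e-10):
            return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

        def nearest_neighbor(query, gallery_feats, gallery_labels):
            dists = [chi_square(query, g) for g in gallery_feats]
            return gallery_labels[int(np.argmin(dists))]

        rng = np.random.default_rng(0)
        gallery = [rng.integers(0, 256, (64, 64)) for _ in range(5)]
        feats = [region_histograms(g) for g in gallery]
        print(nearest_neighbor(region_histograms(gallery[2]), feats, list(range(5))))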

  11. A small-world network model of facial emotion recognition.

    PubMed

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks: one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that the small-world connectivity of the facial emotion network differs clearly from that of these alternatives, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
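
    The small-world claim is checkable with standard graph metrics: a small-world network combines much higher clustering than a size- and density-matched random graph with a comparably short average path length. A sketch with networkx follows; the Watts-Strogatz graph merely stands in for the similarity-thresholded emotion network, whose edges are not given in the record.

        import networkx as nx

        # Stand-in for the 81-node emotion-similarity network
        G = nx.watts_strogatz_graph(n=81, k=6, p=0.1, seed=1)

        # Random reference graph with the same number of nodes and edges
        R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=1)

        print(f"clustering: {nx.average_clustering(G):.3f} "
              f"vs random {nx.average_clustering(R):.3f}")
        if nx.is_connected(G) and nx.is_connected(R):  # path length needs connectivity
            print(f"path length: {nx.average_shortest_path_length(G):.2f} "
                  f"vs random {nx.average_shortest_path_length(R):.2f}")
        # Small-world signature: clustering >> random, path length ~ random.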

  12. Relationship between individual differences in functional connectivity and facial-emotion recognition abilities in adults with traumatic brain injury.

    PubMed

    Rigon, A; Voss, M W; Turkstra, L S; Mutlu, B; Duff, M C

    2017-01-01

    Although several studies have demonstrated that facial-affect recognition impairment is common following moderate-severe traumatic brain injury (TBI), and that there are diffuse alterations in large-scale functional brain networks in TBI populations, little is known about the relationship between the two. Here, in a sample of 26 participants with TBI and 20 healthy comparison participants (HC) we measured facial-affect recognition abilities and resting-state functional connectivity (rs-FC) using fMRI. We then used network-based statistics to examine (A) the presence of rs-FC differences between individuals with TBI and HC within the facial-affect processing network, and (B) the association between inter-individual differences in emotion recognition skills and rs-FC within the facial-affect processing network. We found that participants with TBI showed significantly lower rs-FC in a component comprising homotopic and within-hemisphere, anterior-posterior connections within the facial-affect processing network. In addition, within the TBI group, participants with higher emotion-labeling skills showed stronger rs-FC within a network comprised of intra- and inter-hemispheric bilateral connections. Findings indicate that the ability to successfully recognize facial-affect after TBI is related to rs-FC within components of facial-affective networks, and provide new evidence that further our understanding of the mechanisms underlying emotion recognition impairment in TBI.
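
    As background to the analyses above: an rs-FC matrix is typically just the ROI-by-ROI Pearson correlation of resting-state time series, and network-based statistics then test each edge across groups and look for connected components of suprathreshold edges. A minimal sketch of the first step, on synthetic data with arbitrary dimensions:

        import numpy as np

        def functional_connectivity(ts):
            """ts: (n_timepoints, n_rois) ROI time series.
            Returns the ROI-by-ROI Pearson correlation (rs-FC) matrix."""
            return np.corrcoef(ts.T)

        rng = np.random.default_rng(0)
        ts = rng.standard_normal((200, 10))        # hypothetical 10-ROI scan
        fc = functional_connectivity(ts)
        edges = fc[np.triu_indices_from(fc, k=1)]  # one value per ROI pair
        print(fc.shape, edges.shape)               # (10, 10) (45,)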

  13. Facial affect recognition deficit as a marker of genetic vulnerability to schizophrenia.

    PubMed

    Alfimova, Margarita V; Abramova, Lilia I; Barhatova, Aleksandra I; Yumatova, Polina E; Lyachenko, Galina L; Golimbet, Vera E

    2009-05-01

    The aim of this study was to investigate the possibility that affect recognition impairments are associated with genetic liability to schizophrenia. In a group of 55 unaffected relatives of schizophrenia patients (parents and siblings), we examined the capacity to detect facially expressed emotions and its relationship to schizotypal personality, neurocognitive functioning, and the subject's actual emotional state. The relatives were compared with 103 schizophrenia patients and 99 healthy subjects without any family history of psychoses. Emotional stimuli were nine black-and-white photos of actors, who portrayed six basic emotions as well as interest, contempt, and shame. The results provided evidence of an affect recognition deficit in relatives, though milder than that observed in the patients themselves. No correlation between the deficit and schizotypal personality measured with the SPQ was detected in the group of relatives. Neither cognitive functioning, including attention, verbal memory and linguistic ability, nor actual emotional states accounted for their affect recognition impairments. The results suggest that the facial affect recognition deficit in schizophrenia may be related to a genetic predisposition to the disorder and may serve as an endophenotype in molecular-genetic studies.

  14. Bidirectional communication between amygdala and fusiform gyrus during facial recognition.

    PubMed

    Herrington, John D; Taylor, James M; Grupe, Daniel W; Curby, Kim M; Schultz, Robert T

    2011-06-15

    Decades of research have documented the specialization of the fusiform gyrus (FG) for facial information processing. Recent theories indicate that FG activity is shaped by input from the amygdala, but effective connectivity from the amygdala to the FG remains undocumented. In this fMRI study, 39 participants completed a face recognition task. Eleven participants underwent the same experiment approximately four months later. Robust face-selective activation of the FG, amygdala, and lateral occipital cortex was observed. Dynamic causal modeling and Bayesian Model Selection (BMS) were used to test the intrinsic connections between these structures and their modulation by face perception. BMS results strongly favored a dynamic causal model with bidirectional, face-modulated amygdala-FG connections. However, the right hemisphere connections diminished at time 2, with the face modulation parameter no longer surviving Bonferroni correction. These findings suggest that the amygdala strongly influences FG function during face perception, and that this influence is shaped by experience and stimulus salience. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Processing of Facial Emotion in Bipolar Depression and Euthymia.

    PubMed

    Robinson, Lucy J; Gray, John M; Burt, Mike; Ferrier, I Nicol; Gallagher, Peter

    2015-10-01

    Previous studies of facial emotion processing in bipolar disorder (BD) have reported conflicting findings. In independently conducted studies, we investigated facial emotion labeling in euthymic and depressed BD patients using tasks with static and dynamically morphed images of different emotions displayed at different intensities. Study 1 included 38 euthymic BD patients and 28 controls. Participants completed two tasks: labeling of static images of basic facial emotions (anger, disgust, fear, happy, sad) shown at different expression intensities, and the Eyes Test (Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001), which involves recognition of complex emotions using only the eye region of the face. Study 2 included 53 depressed BD patients and 47 controls. Participants completed two tasks: labeling of "dynamic" facial expressions of the same five basic emotions, and the Emotional Hexagon test (Young, Perret, Calder, Sprengelmeyer, & Ekman, 2002). There were no significant differences between patients and controls on any measure of emotion perception/labeling. A significant group by intensity interaction was observed in both emotion labeling tasks (euthymia and depression), although this effect did not survive the addition of measures of executive function/psychomotor speed as covariates. Only 2.6-15.8% of euthymic patients and 7.8-13.7% of depressed patients scored below the 10th percentile of the controls for total emotion recognition accuracy. There was no evidence of specific deficits in facial emotion labeling in euthymic or depressed BD patients. Methodological variations, including mood state, sample size, and the cognitive demands of the tasks, may contribute significantly to the variability in findings between studies.

  16. Unsupervised learning of facial emotion decoding skills.

    PubMed

    Huelle, Jan O; Sack, Benjamin; Broer, Katja; Komlewa, Irina; Anders, Silke

    2014-01-01

    Research on the mechanisms underlying human facial emotion recognition has long focussed on genetically determined neural algorithms and often neglected the question of how these algorithms might be tuned by social learning. Here we show that facial emotion decoding skills can be significantly and sustainably improved by practice without an external teaching signal. Participants saw video clips of dynamic facial expressions of five different women and were asked to decide which of four possible emotions (anger, disgust, fear, and sadness) was shown in each clip. Although no external information about the correctness of the participant's response or the sender's true affective state was provided, participants showed a significant increase of facial emotion recognition accuracy both within and across two training sessions two days to several weeks apart. We discuss several similarities and differences between the unsupervised improvement of facial decoding skills observed in the current study, unsupervised perceptual learning of simple stimuli described in previous studies and practice effects often observed in cognitive tasks.

  17. Is the emotion recognition deficit associated with frontotemporal dementia caused by selective inattention to diagnostic facial features?

    PubMed

    Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V

    2014-07-01

    Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Using Event Related Potentials to Explore Stages of Facial Affect Recognition Deficits in Schizophrenia

    PubMed Central

    Wynn, Jonathan K.; Lee, Junghee; Horan, William P.; Green, Michael F.

    2008-01-01

    Schizophrenia patients show impairments in identifying facial affect; however, it is not known at what stage facial affect processing is impaired. We evaluated 3 event-related potentials (ERPs) to explore stages of facial affect processing in schizophrenia patients. Twenty-six schizophrenia patients and 27 normal controls participated. In separate blocks, subjects identified the gender of a face, the emotion of a face, or if a building had 1 or 2 stories. Three ERPs were examined: (1) P100 to examine basic visual processing, (2) N170 to examine facial feature encoding, and (3) N250 to examine affect decoding. Behavioral performance on each task was also measured. Results showed that schizophrenia patients’ P100 was comparable to the controls during all 3 identification tasks. Both patients and controls exhibited a comparable N170 that was largest during processing of faces and smallest during processing of buildings. For both groups, the N250 was largest during the emotion identification task and smallest for the building identification task. However, the patients produced a smaller N250 compared with the controls across the 3 tasks. The groups did not differ in behavioral performance in any of the 3 identification tasks. The pattern of intact P100 and N170 suggest that patients maintain basic visual processing and facial feature encoding abilities. The abnormal N250 suggests that schizophrenia patients are less efficient at decoding facial affect features. Our results imply that abnormalities in the later stage of feature decoding could potentially underlie emotion identification deficits in schizophrenia. PMID:18499704
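
    For readers unfamiliar with how components such as the P100, N170, and N250 are quantified: a common approach is the mean amplitude of the trial-averaged waveform within a latency window. The windows and data below are illustrative assumptions, not the study's recording parameters.

        import numpy as np

        def component_amplitude(epochs, times, window):
            """Mean ERP amplitude in a latency window.
            epochs: (n_trials, n_samples) baseline-corrected single trials
            times:  (n_samples,) time axis in ms"""
            erp = epochs.mean(axis=0)                  # average across trials
            mask = (times >= window[0]) & (times <= window[1])
            return erp[mask].mean()

        rng = np.random.default_rng(0)
        times = np.arange(-100, 500, 2)                # 2 ms sampling
        epochs = rng.standard_normal((60, times.size)) # hypothetical trials
        for name, win in [("P100", (80, 130)), ("N170", (130, 200)),
                          ("N250", (230, 330))]:       # typical, assumed windows
            print(name, round(component_amplitude(epochs, times, win), 3))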

  19. Utilizing Current Commercial-off-the-Shelf Facial Recognition and Public Live Video Streaming to Enhance National Security

    DTIC Science & Technology

    2014-09-01

    Subject terms: facial recognition, systems engineering, live video streaming, security cameras, national security, biometrics technologies. Abstract (fragmentary): "...national security by sharing biometric facial recognition data in real-time utilizing infrastructures currently in place..." "...9/11), law enforcement (LE) and Intelligence Community (IC) authorities responsible for protecting citizens from threats against national security..."

  20. A New Method of Facial Expression Recognition Based on SPE Plus SVM

    NASA Astrophysics Data System (ADS)

    Ying, Zilu; Huang, Mingwei; Wang, Zhen; Wang, Zhewei

    A novel method of facial expression recognition (FER) is presented, which uses stochastic proximity embedding (SPE) for data dimension reduction and a support vector machine (SVM) for expression classification. The proposed algorithm is applied to the Japanese Female Facial Expression (JAFFE) database for FER, and better performance is obtained compared with traditional algorithms such as PCA and LDA. The results further demonstrate the effectiveness of the proposed algorithm.
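
    The record does not detail the SPE step, so the sketch below assumes the usual Agrafiotis-style update: repeatedly pick a random pair of points and nudge their embedded distance toward the target distance, with a decaying learning rate. The embedding is transductive (no out-of-sample mapping), and the toy features, class structure, and parameters are all made up; only the SPE-then-SVM pipeline mirrors the record.

        import numpy as np
        from sklearn.svm import SVC

        def spe_embed(D, dim=10, n_steps=20000, lam=1.0, lam_min=0.01, seed=0):
            """Minimal stochastic proximity embedding of a distance matrix D."""
            rng = np.random.default_rng(seed)
            n = D.shape[0]
            X = rng.standard_normal((n, dim))
            decay = (lam_min / lam) ** (1.0 / n_steps)
            for _ in range(n_steps):
                i, j = rng.integers(n), rng.integers(n)
                if i == j:
                    continue
                diff = X[i] - X[j]
                d = np.linalg.norm(diff) + 1e-9
                step = lam * 0.5 * (D[i, j] - d) / d * diff
                X[i] += step
                X[j] -= step
                lam *= decay
            return X

        rng = np.random.default_rng(1)
        centers = 3.0 * rng.standard_normal((7, 50))   # 7 expression classes
        labels = np.repeat(np.arange(7), 15)
        feats = centers[labels] + rng.standard_normal((105, 50))
        D = np.linalg.norm(feats[:, None] - feats[None], axis=-1)
        X = spe_embed(D)
        print(SVC(kernel="rbf").fit(X, labels).score(X, labels))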

  1. Abnormal Facial Emotion Recognition in Depression: Serial Testing in an Ultra-Rapid-Cycling Patient.

    ERIC Educational Resources Information Center

    George, Mark S.; Huggins, Teresa; McDermut, Wilson; Parekh, Priti I.; Rubinow, David; Post, Robert M.

    1998-01-01

    Mood disorder subjects have a selective deficit in recognizing human facial emotion. Whether the facial emotion recognition errors persist during normal mood states (i.e., are state vs. trait dependent) was studied in one male bipolar II patient. Results of five sessions are presented and discussed. (Author/EMK)

  2. Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia

    PubMed Central

    Palermo, Romina; Willis, Megan L.; Rivolta, Davide; McKone, Elinor; Wilson, C. Ellie; Calder, Andrew J.

    2011-01-01

    We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and ‘social’). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability. PMID:21333662

  3. Luminance sticker based facial expression recognition using discrete wavelet transform for physically disabled persons.

    PubMed

    Nagarajan, R; Hariharan, M; Satiyan, M

    2012-08-01

    Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research and has attracted many researchers recently. In this paper, facial expression recognition based on luminance stickers is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with different orders (db1 to db20, Coif1 to Coif5, and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expressions and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of each wavelet family. These standard deviations are used to form a set of feature vectors for classification. In this study, conventional validation and cross-validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN), and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
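
    The feature extraction described above is compact enough to sketch with PyWavelets: one level of 2-D DWT yields an approximation subband and three detail subbands, and the standard deviation of the coefficients forms the feature. Whether the original method pools all subbands into one value or keeps one per subband is not stated, so the per-subband vector below is an assumption, as is the stand-in image.

        import numpy as np
        import pywt

        def dwt_std_features(img, wavelet="db4"):
            """Standard deviations of the level-1 2-D DWT subband coefficients."""
            cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
            return np.array([c.std() for c in (cA, cH, cV, cD)])

        rng = np.random.default_rng(0)
        img = rng.random((64, 64))                  # stand-in sticker-region image
        for w in ("db1", "db20", "coif5", "sym8"):  # a few of the tested families
            print(w, np.round(dwt_std_features(img, w), 3))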

  4. Development of the Ability to Use Facial, Situational, and Vocal Cues to Infer Others' Affective States.

    ERIC Educational Resources Information Center

    Farber, Ellen A.; Moely, Barbara E.

    Results of two studies investigating children's abilities to use different kinds of cues to infer another's affective state are reported in this paper. In the first study, 48 children (3, 4, and 6 to 7 years of age) were given three different kinds of tasks (interpersonal task, facial recognition task, and vocal recognition task). A cross-age…

  5. Gaze Dynamics in the Recognition of Facial Expressions of Emotion.

    PubMed

    Barabanschikov, Vladimir A

    2015-01-01

    We studied which parts and features of the human face are preferentially fixated during the recognition of facial expressions of emotion. Photographs of facial expressions were used; participants categorized them as basic emotions while their eye movements were recorded. Variation in the intensity of an expression was mirrored in the accuracy of emotion recognition and was also reflected in several indices of oculomotor function: the duration of inspection of certain areas of the face (its upper and lower parts, right and left sides), and the location, number, and duration of fixations, as well as the viewing trajectory. In particular, for low-intensity expressions, the right side of the face was attended to predominantly (right-side dominance); this right-side dominance effect was, however, absent for expressions of high intensity. For both low- and high-intensity expressions, the upper part of the face was predominantly fixated, with greater fixation for high-intensity expressions. The majority of trials (70%), in line with findings of previous studies, revealed a V-shaped inspection trajectory. No relationship was found, however, between the accuracy of recognition of emotional expressions and either the location and duration of fixations or the pattern of gaze direction on the face. © The Author(s) 2015.

  6. Face memory and face recognition in children and adolescents with attention deficit hyperactivity disorder: A systematic review.

    PubMed

    Romani, Maria; Vigliante, Miriam; Faedda, Noemi; Rossetti, Serena; Pezzuti, Lina; Guidetti, Vincenzo; Cardona, Francesco

    2018-06-01

    This review focuses on facial recognition abilities in children and adolescents with attention deficit hyperactivity disorder (ADHD). A systematic review, using PRISMA guidelines, was conducted to identify original articles published prior to May 2017 pertaining to memory, face recognition, affect recognition, facial expression recognition and recall of faces in children and adolescents with ADHD. The qualitative synthesis of these studies shows a particular research focus on facial affect recognition, without similar attention to the structural encoding of faces. In this review, we further investigate facial recognition abilities in children and adolescents with ADHD, providing a synthesis of the results observed in the literature, identifying the face recognition tasks used to assess face processing abilities in ADHD, and highlighting aspects not yet explored. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Childhood Facial Recognition Predicts Adolescent Symptom Severity in Autism Spectrum Disorder.

    PubMed

    Eussen, Mart L J M; Louwerse, Anneke; Herba, Catherine M; Van Gool, Arthur R; Verheij, Fop; Verhulst, Frank C; Greaves-Lord, Kirstin

    2015-06-01

    Limited accuracy and speed in facial recognition (FR) and in the identification of facial emotions (IFE) have been shown in autism spectrum disorders (ASD). This study aimed at evaluating the predictive value of atypicalities in FR and IFE for future symptom severity in children with ASD. Therefore, we performed a seven-year follow-up study in 87 children with ASD. FR and IFE were assessed in childhood (T1: age 6-12) using the Amsterdam Neuropsychological Tasks (ANT). Symptom severity was assessed using the Autism Diagnostic Observation Schedule (ADOS) in childhood and again seven years later during adolescence (T2: age 12-19). Multiple regression analyses were performed to investigate whether FR and IFE in childhood predicted ASD symptom severity in adolescence, while controlling for ASD symptom severity in childhood. We found that more accurate FR significantly predicted lower adolescent ASD symptom severity scores (ΔR² = .09), even when controlling for childhood ASD symptom severity. IFE was not a significant predictor of ASD symptom severity in adolescence. From these results it can be concluded that, in children with ASD, the accuracy of FR in childhood is a relevant predictor of ASD symptom severity in adolescence. Test results on FR in children with ASD may have prognostic value regarding later symptom severity. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
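
    The ΔR² reported above is the increment in explained variance when childhood FR accuracy is added to a model that already contains childhood symptom severity. A sketch on simulated data (all numbers invented, variable names hypothetical):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 87
        t1_severity = rng.standard_normal(n)    # hypothetical childhood ADOS
        fr_accuracy = rng.standard_normal(n)    # hypothetical FR accuracy
        t2_severity = (0.5 * t1_severity - 0.3 * fr_accuracy
                       + rng.standard_normal(n))

        base = sm.OLS(t2_severity, sm.add_constant(t1_severity)).fit()
        full = sm.OLS(t2_severity, sm.add_constant(
            np.column_stack([t1_severity, fr_accuracy]))).fit()
        print(f"delta R^2 = {full.rsquared - base.rsquared:.3f}")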

  8. Unspoken vowel recognition using facial electromyogram.

    PubMed

    Arjunan, Sridhar P; Kumar, Dinesh K; Yau, Wai C; Weghorn, Hans

    2006-01-01

    The paper aims to identify speech from facial muscle activity, without audio signals. It presents an effective technique that measures the relative activity of the articulatory muscles. Five English vowels were used as recognition variables. The paper reports using the moving root mean square (RMS) of the surface electromyogram (SEMG) of four facial muscles to segment the signal and identify the start and end of an utterance. The RMS of the signal between the start and end markers was integrated and normalised, representing the relative activity of the four muscles. These features were classified using a back-propagation neural network to identify the speech. The technique successfully classified the five vowels into three classes and was not sensitive to variation in the speed and style of speaking across subjects. The results also show that the technique was suitable for classifying the five vowels into five classes when trained for each subject. It is suggested that such a technology may be used to give simple unvoiced commands when trained for a specific user.
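
    The segmentation-and-feature step described above (moving RMS, onset/offset detection, integration, normalization) is easy to sketch; the window length, rest-baseline threshold, and simulated burst below are assumptions, not the paper's settings.

        import numpy as np

        def moving_rms(x, win=64):
            """Moving root-mean-square envelope of a raw SEMG trace."""
            return np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same"))

        def utterance_features(channels, win=64, thresh_k=3.0):
            """channels: (n_muscles, n_samples). Segment where any channel's
            RMS exceeds a rest-derived threshold, then integrate and normalize
            each channel's RMS over that span (relative muscle activity)."""
            rms = np.array([moving_rms(ch, win) for ch in channels])
            envelope = rms.max(axis=0)
            thresh = thresh_k * envelope[:200].mean()  # first samples = rest
            active = np.where(envelope > thresh)[0]
            start, end = active[0], active[-1]
            feats = rms[:, start:end].sum(axis=1)
            return feats / feats.sum()

        rng = np.random.default_rng(0)
        sig = 0.1 * rng.standard_normal((4, 2000))          # 4 muscles at rest
        sig[:, 800:1200] += rng.standard_normal((4, 400))   # simulated utterance
        print(np.round(utterance_features(sig), 3))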

  9. Person-independent facial expression analysis by fusing multiscale cell features

    NASA Astrophysics Data System (ADS)

    Zhou, Lubing; Wang, Han

    2013-03-01

    Automatic facial expression recognition is an interesting and challenging task. To achieve satisfactory accuracy, deriving a robust facial representation is especially important. A novel appearance-based feature, multiscale cell local intensity increasing patterns (MC-LIIP), is presented for representing facial images and conducting person-independent facial expression analysis. The LIIP uses a decimal number to encode the texture or intensity distribution around each pixel via pixel-to-pixel intensity comparison. To boost noise resistance, MC-LIIP carries out the comparison on the average values of scalable cells instead of individual pixels. The facial descriptor fuses region-based histograms of MC-LIIP features from various scales, so as to encode not only the textural microstructures but also the macrostructures of facial images. Finally, a support vector machine classifier is applied for expression recognition. Experimental results on the CK+ and Karolinska Directed Emotional Faces databases show the superiority of the proposed method.

  10. Implementation of facial recognition with Microsoft Kinect v2 sensor for patient verification.

    PubMed

    Silverstein, Evan; Snyder, Michael

    2017-06-01

    The aim of this study was to present a straightforward implementation of facial recognition using the Microsoft Kinect v2 sensor for patient identification in a radiotherapy setting. A facial recognition system was created with the Microsoft Kinect v2 using a facial mapping library distributed with the Kinect v2 SDK as a basis for the algorithm. The system extracts 31 fiducial points representing various facial landmarks, which are used in both the creation of a reference data set and subsequent evaluations of real-time sensor data in the matching algorithm. To test the algorithm, a database of 39 faces was created, each with 465 vectors derived from the fiducial points, and a one-to-one matching procedure was performed to obtain sensitivity and specificity data for the facial identification system. ROC curves were plotted to display system performance and identify thresholds for match determination. In addition, system performance as a function of ambient light intensity was tested. Using optimized parameters in the matching algorithm, the sensitivity of the system for 5299 trials was 96.5% and the specificity was 96.7%. The results indicate a fairly robust methodology for verifying, in real-time, a specific face through comparison against a precollected reference data set. In its current implementation, the process of data collection for each face and subsequent matching session averaged approximately 30 s, which may be too onerous to provide a realistic supplement to patient identification in a clinical setting. Despite the time commitment, the data collection process was well tolerated by all participants and most robust when consistent ambient light conditions were maintained across both the reference recording session and subsequent real-time identification sessions. A facial recognition system can be implemented for patient identification using the Microsoft Kinect v2 sensor and the distributed SDK. In its present form, the system is accurate, if time-consuming.
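
    Since 465 equals the number of pairs among 31 points, the vectors are presumably the pairwise inter-landmark vectors, though the record does not say so explicitly. The matching-and-threshold logic can then be sketched as below: compare a live vector set against the reference, and pick an operating threshold from an ROC curve. All scores here are simulated, not the study's data.

        import numpy as np
        from sklearn.metrics import roc_curve

        def match_score(ref_vectors, live_vectors):
            """Mean absolute difference between reference and live
            fiducial-derived vectors; smaller means a better match."""
            return np.mean(np.abs(ref_vectors - live_vectors))

        rng = np.random.default_rng(0)
        ref = rng.random(465)
        print(f"genuine attempt: {match_score(ref, ref + rng.normal(0, 0.01, 465)):.4f}")

        genuine = rng.normal(0.02, 0.005, 500)   # hypothetical same-face scores
        impostor = rng.normal(0.10, 0.020, 500)  # hypothetical other-face scores
        labels = np.r_[np.ones(500), np.zeros(500)]
        # roc_curve expects higher = more positive, so negate the distances
        fpr, tpr, thr = roc_curve(labels, -np.r_[genuine, impostor])
        best = np.argmax(tpr - fpr)              # Youden's J as one threshold choice
        print(f"threshold {-thr[best]:.3f}: TPR {tpr[best]:.3f}, FPR {fpr[best]:.3f}")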

  11. Predicting the Accuracy of Facial Affect Recognition: The Interaction of Child Maltreatment and Intellectual Functioning

    ERIC Educational Resources Information Center

    Shenk, Chad E.; Putnam, Frank W.; Noll, Jennie G.

    2013-01-01

    Previous research demonstrates that both child maltreatment and intellectual performance contribute uniquely to the accurate identification of facial affect by children and adolescents. The purpose of this study was to extend this research by examining whether child maltreatment affects the accuracy of facial recognition differently at varying…

  12. Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia.

    PubMed

    Palermo, Romina; Willis, Megan L; Rivolta, Davide; McKone, Elinor; Wilson, C Ellie; Calder, Andrew J

    2011-04-01

    We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and 'social'). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Context Effects on Facial Affect Recognition in Schizophrenia and Autism: Behavioral and Eye-Tracking Evidence.

    PubMed

    Sasson, Noah J; Pinkham, Amy E; Weittenhiller, Lauren P; Faso, Daniel J; Simpson, Claire

    2016-05-01

    Although schizophrenia (SCZ) and autism spectrum disorder (ASD) share impairments in emotion recognition, the mechanisms underlying these impairments may differ. The current study used the novel "Emotions in Context" task to examine how the interpretation and visual inspection of facial affect is modulated by congruent and incongruent emotional contexts in SCZ and ASD. Both adults with SCZ (n = 44) and those with ASD (n = 21) exhibited reduced affect recognition relative to typically developing (TD) controls (n = 39) when faces were integrated within broader emotional scenes but not when they were presented in isolation, underscoring the importance of using stimuli that better approximate real-world contexts. Additionally, viewing faces within congruent emotional scenes improved accuracy and visual attention to the face for controls more so than for the clinical groups, suggesting that individuals with SCZ and ASD may not benefit from the presence of complementary emotional information as readily as controls. Despite these similarities, important distinctions between SCZ and ASD were found. In every condition, IQ was related to emotion-recognition accuracy for the SCZ group but not for the ASD or TD groups. Further, only the ASD group failed to increase their visual attention to faces in incongruent emotional scenes, suggesting a lower reliance on facial information within ambiguous emotional contexts relative to congruent ones. Collectively, these findings highlight both shared and distinct social cognitive processes in SCZ and ASD that may contribute to their characteristic social disabilities. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  14. Functional connectivity between amygdala and facial regions involved in recognition of facial threat

    PubMed Central

    Harada, Tokiko; Ruffman, Ted; Sadato, Norihiro; Iidaka, Tetsuya

    2013-01-01

    The recognition of threatening faces is important for making social judgments. For example, threatening facial features of defendants could affect the decisions of jurors during a trial. Previous neuroimaging studies using faces of members of the general public have identified a pivotal role of the amygdala in perceiving threat. This functional magnetic resonance imaging study used face photographs of male prisoners who had been convicted of first-degree murder (MUR) as threatening facial stimuli. We compared the subjective ratings of MUR faces with those of control (CON) faces and examined how they were related to brain activation, particularly, the modulation of the functional connectivity between the amygdala and other brain regions. The MUR faces were perceived to be more threatening than the CON faces. The bilateral amygdala was shown to respond to both MUR and CON faces, but subtraction analysis revealed no significant difference between the two. Functional connectivity analysis indicated that the extent of connectivity between the left amygdala and the face-related regions (i.e. the superior temporal sulcus, inferior temporal gyrus and fusiform gyrus) was correlated with the subjective threat rating for the faces. We have demonstrated that the functional connectivity is modulated by vigilance for threatening facial features. PMID:22156740

  15. Unsupervised learning of facial emotion decoding skills

    PubMed Central

    Huelle, Jan O.; Sack, Benjamin; Broer, Katja; Komlewa, Irina; Anders, Silke

    2013-01-01

    Research on the mechanisms underlying human facial emotion recognition has long focussed on genetically determined neural algorithms and often neglected the question of how these algorithms might be tuned by social learning. Here we show that facial emotion decoding skills can be significantly and sustainably improved by practice without an external teaching signal. Participants saw video clips of dynamic facial expressions of five different women and were asked to decide which of four possible emotions (anger, disgust, fear, and sadness) was shown in each clip. Although no external information about the correctness of the participant’s response or the sender’s true affective state was provided, participants showed a significant increase of facial emotion recognition accuracy both within and across two training sessions two days to several weeks apart. We discuss several similarities and differences between the unsupervised improvement of facial decoding skills observed in the current study, unsupervised perceptual learning of simple visual stimuli described in previous studies and practice effects often observed in cognitive tasks. PMID:24578686

  16. Psychopathy and facial emotion recognition ability in patients with bipolar affective disorder with or without delinquent behaviors.

    PubMed

    Demirel, Husrev; Yesilbas, Dilek; Ozver, Ismail; Yuksek, Erhan; Sahin, Feyzi; Aliustaoglu, Suheyla; Emul, Murat

    2014-04-01

    It is well known that patients with bipolar disorder are more prone to violence and have more criminal behaviors than the general population. A strong relationship between criminal behavior and the inability to empathize and perceive other people's feelings and facial expressions increases the risk of delinquent behaviors. In this study, we aimed to investigate deficits in facial emotion recognition ability in euthymic bipolar patients who had committed an offense, compared with non-delinquent euthymic patients with bipolar disorder. Fifty-five euthymic patients with delinquent behaviors and 54 non-delinquent euthymic bipolar patients as a control group were included in the study. Ekman's Facial Emotion Recognition Test, sociodemographic data, the Hare Psychopathy Checklist, the Hamilton Depression Rating Scale, and the Young Mania Rating Scale were applied to both groups. There were no significant differences between the case and control groups in mean age, gender, level of education, mean age at disease onset, or suicide attempts (p>0.05). The three most common delinquent behaviors in patients with euthymic bipolar disorder were injury (30.8%), threat or insult (20%), and homicide (12.7%). The most accurately identified facial emotion was "happy" (>99% in both groups), while the most misidentified was "fear" (<50% in both groups). The overall accuracy of facial emotion recognition was significantly lower in patients with delinquent behaviors than in non-delinquent ones (p<0.05). The accuracy of recognizing fear expressions was significantly worse in the case group than in the control group (p<0.05), and it tended to be worse for angry facial expressions in delinquent euthymic bipolar patients. Response times for happy, fearful, disgusted, and angry expressions were significantly longer in the case group than in the control group (p<0.05). This study is the first

  17. Influence of gender in the recognition of basic facial expressions: A critical literature review

    PubMed Central

    Forni-Santos, Larissa; Osório, Flávia L

    2015-01-01

    AIM: To conduct a systematic literature review about the influence of gender on the recognition of facial expressions of six basic emotions. METHODS: We performed a systematic search with the terms (face OR facial) AND (processing OR recognition OR perception) AND (emotional OR emotion) AND (gender OR sex) in the PubMed, PsycINFO, LILACS, and SciELO electronic databases for articles assessing outcomes related to response accuracy, latency, and emotional intensity. Article selection was performed according to parameters set by Cochrane. The reference lists of the articles found through the database search were checked for additional references of interest. RESULTS: With respect to accuracy, women tend to perform better than men when all emotions are considered as a set. Regarding specific emotions, there seem to be no gender-related differences in the recognition of happiness, whereas results are quite heterogeneous for the remaining emotions, especially sadness, anger, and disgust. Fewer articles dealt with the parameters of response latency and emotional intensity, which hinders the generalization of their findings, especially in the face of their methodological differences. CONCLUSION: The studies conducted to date do not allow for definite conclusions concerning the role of the observer's gender in the recognition of facial emotion, mostly because of the absence of standardized methods of investigation. PMID:26425447

  18. Discrimination and categorization of emotional facial expressions and faces in Parkinson's disease.

    PubMed

    Alonso-Recio, Laura; Martín, Pilar; Rubio, Sandra; Serrano, Juan M

    2014-09-01

    Our objective was to compare the ability to discriminate and categorize emotional facial expressions (EFEs) and facial identity characteristics (age and/or gender) in a group of 53 individuals with Parkinson's disease (PD) and another group of 53 healthy subjects. On the one hand, by means of discrimination and identification tasks, we compared two stages of the visual recognition process that could be selectively affected in individuals with PD. On the other hand, the comparison of facial expression with gender and age allowed us to test whether emotional or non-emotional content influences the configural perception of faces. In Experiment I, we found no differences between groups in the discrimination tasks, for either facial expression or age. Conversely, in Experiment II, we found differences between the groups, but only in the EFE identification task. Taken together, our results indicate that the configural perception of faces does not seem to be globally impaired in PD. However, this ability is selectively altered when the categorization of emotional faces is required. A deeper assessment of the PD group indicated that the decline in facial expression categorization was more evident in a subgroup of patients with greater global impairment (motor and cognitive). Taken together, these results suggest that the problems found in facial expression recognition may be associated with the progressive neuronal loss in frontostriatal and mesolimbic circuits that characterizes PD. © 2013 The British Psychological Society.

  19. Comparing Facial Emotional Recognition in Patients with Borderline Personality Disorder and Patients with Schizotypal Personality Disorder with a Normal Group

    PubMed Central

    Farsham, Aida; Abbaslou, Tahereh; Bidaki, Reza; Bozorg, Bonnie

    2017-01-01

    Objective: No research has compared facial emotion recognition in patients with borderline personality disorder (BPD) and patients with schizotypal personality disorder (SPD). The present study aimed at comparing facial emotion recognition in these patients with that of the general population. The neurocognitive processing of emotions can reveal the pathologic style of these 2 disorders. Method: Twenty BPD patients, 16 SPD patients, and 20 healthy individuals were selected through convenience (available) sampling. The Structured Clinical Interview for Axis II, the Millon Personality Inventory, the Beck Depression Inventory, and a Facial Emotional Recognition Test were administered to all participants. Results: One-way ANOVA and Scheffe's post hoc test revealed significant differences in the neuropsychological assessment of facial emotion recognition between the BPD and SPD patients and the normal group (p = 0.001). A significant difference was found in the recognition of fear between the BPD group and the normal population (p = 0.008), and between SPD patients and the control group in the recognition of wonder (p = 0.04). The results indicated a deficit in negative emotion recognition, especially disgust; thus, it can be concluded that these patients have similar neurocognitive profiles in the emotion domain. PMID:28659980

  20. [Emotion Recognition in Patients with Peripheral Facial Paralysis - A Pilot Study].

    PubMed

    Konnerth, V; Mohr, G; von Piekartz, H

    2016-02-01

    The perception of emotions is an important component enabling human beings to engage in social interaction in everyday life, and the ability to recognize emotions in another person's face is a key prerequisite for this. The present study aimed at evaluating the ability of subjects with peripheral facial paresis to perceive emotions, compared with healthy individuals. A pilot study was conducted in which 13 people with peripheral facial paresis participated. The assessment included the Facially Expressed Emotion Labeling Test (FEEL-Test), the Facial Laterality Recognition Test (FLR-Test), and the Toronto Alexithymia Scale 26 (TAS-26). The results were compared with data from healthy people in other studies. The subjects with facial paresis had more difficulty recognizing basic emotions than healthy individuals, although these differences were not significant. They were, however, significantly slower in perceiving facial laterality (right/left: p<0.001) than healthy people. With regard to alexithymia, the tested group showed significantly higher scores (p<0.001) than unimpaired people. The present pilot study thus does not prove an impairment of this patient group's ability to recognize emotions and facial laterality. For future studies, the research question should be examined in a larger sample. © Georg Thieme Verlag KG Stuttgart · New York.

  1. Human facial neural activities and gesture recognition for machine-interfacing applications.

    PubMed

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands and can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. The detected EMGs are passed through a band-pass filter, and root mean square features are extracted. Various combinations of gestures, with different numbers of gestures in each group, are formed from the existing facial gestures. Finally, all combinations are trained and classified by a fuzzy c-means classifier, and the combinations with the highest recognition accuracy in each group are selected. An average accuracy of more than 90% for the chosen combinations demonstrates their suitability as command controllers.
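
    The preprocessing chain above (band-pass filtering, then RMS features, then clustering-based classification) can be sketched as follows. The passband, sampling rate, and toy gesture data are assumptions, and crisp k-means stands in for the paper's fuzzy c-means classifier, which is not in scikit-learn.

        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.cluster import KMeans

        def bandpass(x, fs=1000.0, lo=20.0, hi=450.0, order=4):
            """Zero-phase Butterworth band-pass, a common facial-EMG passband."""
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        def rms_feature(x):
            return np.sqrt(np.mean(x ** 2))

        rng = np.random.default_rng(0)
        # Hypothetical data: 11 gestures x 10 repetitions, amplitude-coded
        X = np.array([[rms_feature(bandpass((1 + 0.5 * g) * rng.standard_normal(1000)))]
                      for g in range(11) for _ in range(10)])
        km = KMeans(n_clusters=11, n_init=10, random_state=0).fit(X)
        print(km.labels_[:10])   # repetitions of gesture 0 should share a cluster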

  2. Target-context unitization effect on the familiarity-related FN400: a face recognition exclusion task.

    PubMed

    Guillaume, Fabrice; Etienne, Yann

    2015-03-01

    Using two exclusion tasks, the present study examined how the ERP correlates of face recognition are affected by the nature of the information to be retrieved. Intrinsic (facial expression) and extrinsic (background scene) visual information were paired with face identity and constituted the exclusion criterion at test time. Although perceptual information had to be taken into account in both situations, the FN400 old-new effect was observed only for old target faces on the expression-exclusion task, whereas it was found for both old target and old non-target faces in the background-exclusion situation. These results reveal that the FN400, which is generally interpreted as a correlate of familiarity, was modulated by the retrieval of intra-item and intrinsic face information, but not by the retrieval of extrinsic information. The observed effects on the FN400 depended on the nature of the information to be retrieved and its relationship (unitization) to the recognition target. On the other hand, the parietal old-new effect (generally described as an ERP correlate of recollection) reflected the retrieval of both types of contextual features equivalently. The current findings are discussed in relation to recent controversies about the nature of the recognition processes reflected by the ERP correlates of face recognition. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Quality of life differences in patients with right- versus left-sided facial paralysis: Universal preference of right-sided human face recognition.

    PubMed

    Ryu, Nam Gyu; Lim, Byung Woo; Cho, Jae Keun; Kim, Jin

    2016-09-01

    In a preliminary study using hybrid hemi-facial photos, we investigated whether experiencing right- or left-sided facial paralysis affects an individual's recognition of one side of the human face; we then examined the relationship between facial recognition ability, stress, and quality of life. To investigate the predominance of one side of the human face in face recognition, 100 normal participants (right-handed: n = 97, left-handed: n = 3; right brain dominance: n = 56, left brain dominance: n = 44) answered a questionnaire that included hybrid hemi-facial photos developed to determine the superiority of one side in human face recognition. To determine differences in stress level and quality of life between individuals experiencing right- and left-sided facial paralysis, 100 patients (right side: 50, left side: 50, not including traumatic facial nerve paralysis) answered a questionnaire comprising the facial disability index test and a quality-of-life measure (SF-36, Korean version). Regardless of handedness or hemispheric dominance, the proportion of participants showing right-side predominance in human face recognition was larger than that showing left-side predominance (71% versus 12%; neutral: 17%). The facial disability index of patients with right-sided facial paralysis was lower than that of left-sided patients (68.8 ± 9.42 versus 76.4 ± 8.28), and the SF-36 scores of right-sided patients were lower than those of left-sided patients (119.07 ± 15.24 versus 123.25 ± 16.48; total score: 166). Given the universal preference for the right side in human face recognition, patients with right-sided facial paralysis showed worse psychological mood and social interaction than those with left-sided paralysis. This information is helpful to clinicians in that psychological and social factors should be considered when treating patients with facial paralysis. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  4. Does cortisol modulate emotion recognition and empathy?

    PubMed

    Duesenberg, Moritz; Weber, Juliane; Schulze, Lars; Schaeuffele, Carmen; Roepke, Stefan; Hellmann-Regen, Julian; Otte, Christian; Wingenfeld, Katja

    2016-04-01

    Emotion recognition and empathy are important aspects of interacting with and understanding other people's behaviors and feelings. The human environment comprises stressful situations that impact social interactions on a daily basis. The aim of the study was to examine the effects of the stress hormone cortisol on emotion recognition and empathy. In this placebo-controlled study, 40 healthy men and 40 healthy women (mean age 24.5 years) received either 10 mg of hydrocortisone or placebo. We used the Multifaceted Empathy Test to measure emotional and cognitive empathy. Furthermore, we examined emotion recognition from facial expressions, which contained two emotions (anger and sadness) and two emotion intensities (40% and 80%). We did not find a main effect of treatment or sex on either empathy or emotion recognition, but we found a sex × emotion interaction on emotion recognition. The main result was a four-way interaction on emotion recognition including treatment, sex, emotion, and task difficulty. At 40% task difficulty, women recognized angry faces better than men in the placebo condition. Furthermore, in the placebo condition, men recognized sadness better than anger. At 80% task difficulty, men and women performed equally well in recognizing sad faces, but men performed worse than women with regard to angry faces. Our results did not support the hypothesis that increases in cortisol concentration alone influence empathy and emotion recognition in healthy young individuals; however, sex and task difficulty appear to be important variables in emotion recognition from facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Facial Emotion Recognition Performance Differentiates Between Behavioral Variant Frontotemporal Dementia and Major Depressive Disorder.

    PubMed

    Chiu, Isabelle; Piguet, Olivier; Diehl-Schmid, Janine; Riedl, Lina; Beck, Johannes; Leyhe, Thomas; Holsboer-Trachsler, Edith; Kressig, Reto W; Berres, Manfred; Monsch, Andreas U; Sollberger, Marc

    Misdiagnosis of early behavioral variant frontotemporal dementia (bvFTD) with major depressive disorder (MDD) is not uncommon due to overlapping symptoms. The aim of this study was to improve the discrimination between these disorders using a novel facial emotion perception task. In this prospective cohort study (July 2013-March 2016), we compared 25 patients meeting Rascovsky diagnostic criteria for bvFTD, 20 patients meeting DSM-IV criteria for MDD, 21 patients meeting McKhann diagnostic criteria for Alzheimer's disease dementia, and 31 healthy participants on a novel emotion intensity rating task comprising morphed low-intensity facial stimuli. Participants were asked to rate the intensity of morphed faces on the congruent basic emotion (eg, rating on sadness when sad face is shown) and on the 5 incongruent basic emotions (eg, rating on each of the other basic emotions when sad face is shown). While bvFTD patients underrated congruent emotions (P < .01), they also overrated incongruent emotions (P < .001), resulting in confusion of facial emotions. In contrast, MDD patients overrated congruent negative facial emotions (P < .001), but not incongruent facial emotions. Accordingly, ratings of congruent and incongruent emotions highly discriminated between bvFTD and MDD patients, ranging from area under the curve (AUC) = 93% to AUC = 98%. Further, an almost complete discrimination (AUC = 99%) was achieved by contrasting the 2 rating types. In contrast, Alzheimer's disease dementia patients perceived emotions similarly to healthy participants, indicating no impact of cognitive impairment on rating scores. Our congruent and incongruent facial emotion intensity rating task allows a detailed assessment of facial emotion perception in patient populations. By using this simple task, we achieved an almost complete discrimination between bvFTD and MDD, potentially helping improve the diagnostic certainty in early bvFTD. © Copyright 2018 Physicians Postgraduate Press, Inc.
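
    The AUC figures above summarize how well a single rating score separates the two diagnoses; with per-patient scores in hand, the computation is one call. The sketch below uses simulated incongruent-emotion ratings purely to illustrate the calculation, with group sizes taken from the record.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        bvftd = rng.normal(3.0, 0.6, 25)   # simulated incongruent ratings, bvFTD
        mdd = rng.normal(1.5, 0.6, 20)     # simulated incongruent ratings, MDD
        labels = np.r_[np.ones(25), np.zeros(20)]       # 1 = bvFTD
        print(f"AUC = {roc_auc_score(labels, np.r_[bvftd, mdd]):.2f}")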

  6. Emotion Recognition in Face and Body Motion in Bulimia Nervosa.

    PubMed

    Dapelo, Marcela Marin; Surguladze, Simon; Morris, Robin; Tchanturia, Kate

    2017-11-01

    Social cognition has been studied extensively in anorexia nervosa (AN), but there are few studies in bulimia nervosa (BN). This study investigated the ability of people with BN to recognise emotions in ambiguous facial expressions and in body movement. Participants were 26 women with BN, who were compared with 35 with AN and 42 healthy controls. Participants completed an emotion recognition task using faces portraying blended emotions, along with a body emotion recognition task using videos of point-light walkers. The results indicated that BN participants exhibited difficulties recognising disgust in less-ambiguous facial expressions, and a tendency to interpret non-angry faces as angry, compared with healthy controls. These difficulties were similar to those found in AN. There were no significant differences amongst the groups in body motion emotion recognition. The findings suggest that difficulties with disgust and anger recognition in facial expressions may be shared transdiagnostically in people with eating disorders. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.

  7. Effects of levodopa-carbidopa-entacapone and smoked cocaine on facial affect recognition in cocaine smokers.

    PubMed

    Bedi, Gillinder; Shiffrin, Laura; Vadhan, Nehal P; Nunes, Edward V; Foltin, Richard W; Bisaga, Adam

    2016-04-01

    In addition to difficulties in daily social functioning, regular cocaine users have decrements in social processing (the cognitive and affective processes underlying social behavior) relative to non-users. Little is known, however, about the effects of clinically-relevant pharmacological agents, such as cocaine and potential treatment medications, on social processing in cocaine users. Such drug effects could potentially alleviate or compound baseline social processing decrements in cocaine abusers. Here, we assessed the individual and combined effects of smoked cocaine and a potential treatment medication, levodopa-carbidopa-entacapone (LCE), on facial emotion recognition in cocaine smokers. Healthy non-treatment-seeking cocaine smokers (N = 14; two female) completed this 11-day inpatient within-subjects study. Participants received LCE (titrated to 400 mg/100 mg/200 mg b.i.d.) for five days with the remaining time on placebo. The order of medication administration was counterbalanced. Facial emotion recognition was measured twice during target LCE dosing and twice on placebo: once without cocaine and once after repeated cocaine doses. LCE increased the response threshold for identification of facial fear, biasing responses away from fear identification. Cocaine had no effect on facial emotion recognition. Results highlight the possibility for candidate pharmacotherapies to have unintended impacts on social processing in cocaine users, potentially exacerbating already existing difficulties in this population. © The Author(s) 2016.

  8. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  9. Empathy, but not mimicry restriction, influences the recognition of change in emotional facial expressions.

    PubMed

    Kosonogov, Vladimir; Titova, Alisa; Vorobyeva, Elena

    2015-01-01

    The current study addressed the hypothesis that empathy and the restriction of observers' facial muscles can influence recognition of emotional facial expressions. A sample of 74 participants recognized the subjective onset of emotional facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral) in a series of morphed face photographs showing a gradual change (frame by frame) from one expression to another. The high-empathy participants (as measured by the Empathy Quotient) recognized emotional facial expressions at earlier photographs in the series than did low-empathy ones, but there was no difference in exploration time. Restricting observers' facial muscles (with adhesive plasters and a stick held in the mouth) did not influence the responses. We discuss these findings in the context of embodied simulation theory and previous data on empathy.

  10. Recognition and Posing of Emotional Expressions by Abused Children and Their Mothers.

    ERIC Educational Resources Information Center

    Camras, Linda A.; And Others

    1988-01-01

    A total of 20 abused and 20 nonabused mother-child pairs (children aged three to seven years) participated in a facial expression posing task and a facial expression recognition task. Findings suggest that abused children may not observe as often as nonabused children do the easily interpreted voluntary displays of emotion by their mothers. (RH)

  11. Cross-domain expression recognition based on sparse coding and transfer learning

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Zhang, Weiyi; Huang, Yong

    2017-05-01

    Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, the conditions of independent and identical distribution are hardly satisfied for the training set and test set because of differences in lighting, shading, race and so on. In order to solve this problem and improve the performance of expression recognition in actual applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, that is, the dictionary, is learnt. Then, based on the idea of transfer learning, the learned primitive pattern is transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. The experimental results on the CK+, JAFFE and NVIE databases show that the sparse-coding-based transfer learning method can effectively improve the expression recognition rate in the cross-domain expression recognition task and is suitable for practical facial expression recognition applications.
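
    The pipeline described above lends itself to a compact illustration. Below is a minimal sketch of the core idea only (learn a dictionary on one domain, then sparse-code samples from another domain against it), using scikit-learn as a stand-in; the array shapes, parameter values and the OMP coder are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: dictionary learnt on a source domain, reused to encode a
# target domain (the "transfer" step). All shapes and parameters are invented.
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
source_patches = rng.normal(size=(500, 64))  # e.g. vectorized 8x8 patches, source domain
target_patches = rng.normal(size=(200, 64))  # patches from a different (test) domain

# Learn a common "primitive" dictionary on the source domain.
dico = DictionaryLearning(n_components=32, alpha=1.0, max_iter=200, random_state=0)
dico.fit(source_patches)

# Transfer: sparse-code target-domain patches against the source dictionary.
coder = SparseCoder(dictionary=dico.components_,
                    transform_algorithm="omp", transform_n_nonzero_coefs=5)
target_codes = coder.transform(target_patches)  # sparse features for a classifier
print(target_codes.shape)  # (200, 32)
```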

  12. Joint recognition-expression impairment of facial emotions in Huntington's disease despite intact understanding of feelings.

    PubMed

    Trinkler, Iris; Cleret de Langavant, Laurent; Bachoud-Lévi, Anne-Catherine

    2013-02-01

    Patients with Huntington's disease (HD), a neurodegenerative disorder that causes major motor impairments, also show cognitive and emotional deficits. While their deficit in recognising emotions has been explored in depth, little is known about their ability to express emotions and understand their feelings. If these faculties were impaired, patients might not only mis-read emotion expressions in others, but their own emotions might also be mis-interpreted by others, or they might have difficulties understanding and describing their feelings. We compared the performance of recognition and expression of facial emotions in 13 HD patients with mild motor impairments but without significant bucco-facial abnormalities, and 13 controls matched for age and education. Emotion recognition was investigated in a forced-choice recognition test (FCR), and emotion expression by filming participants while they mimed the six basic emotional facial expressions (anger, disgust, fear, surprise, sadness and joy) to the experimenter. The films were then segmented into 60 stimuli per participant and four external raters performed an FCR on this material. Further, we tested understanding of feelings in self (alexithymia) and others (empathy) using questionnaires. Both recognition and expression were impaired across different emotions in HD compared to controls, and recognition and expression scores were correlated. By contrast, alexithymia and empathy scores were very similar in HD and controls. This suggests that emotion deficits in HD might be tied to the expression itself. Because similar emotion recognition-expression deficits are also found in Parkinson's disease and vascular lesions of the striatum, our results further confirm the importance of the striatum for emotion recognition and expression, while access to the meaning of feelings relies on a different brain network, and is spared in HD. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Attention and memory bias to facial emotions underlying negative symptoms of schizophrenia.

    PubMed

    Jang, Seon-Kyeong; Park, Seon-Cheol; Lee, Seung-Hwan; Cho, Yang Seok; Choi, Kee-Hong

    2016-01-01

    This study assessed bias in selective attention to facial emotions in negative symptoms of schizophrenia and its influence on subsequent memory for facial emotions. Thirty people with schizophrenia who had high or low levels of negative symptoms (n = 15 each) and 21 healthy controls completed a visual probe detection task investigating selective attention bias (happy, sad, and angry faces randomly presented for 50, 500, or 1000 ms). A yes/no incidental facial memory task was then completed. Attention bias scores and recognition errors were calculated. Those with high negative symptoms exhibited reduced attention to emotional faces relative to neutral faces; those with low negative symptoms showed the opposite pattern when faces were presented for 500 ms, regardless of valence. Compared to healthy controls, those with high negative symptoms made more errors for happy faces in the memory task. Reduced attention to emotional faces in the probe detection task was significantly associated with less pleasure and motivation and more recognition errors for happy faces in the schizophrenia group only. Attention bias away from emotional information relatively early in the attentional process, and the associated diminished positive memory, may relate to pathological mechanisms underlying negative symptoms.

  14. Cortical representation of facial and tongue movements: a task functional magnetic resonance imaging study.

    PubMed

    Xiao, Fu-Long; Gao, Pei-Yi; Qian, Tian-Yi; Sui, Bin-Bin; Xue, Jing; Zhou, Jian; Lin, Yan

    2017-05-01

    Functional magnetic resonance imaging (fMRI) mapping can show the cortical areas activated during movement, but little is known about the precise localization of facial and tongue movements. The aim of this study was to investigate the representation of facial and tongue movements by task fMRI. Twenty right-handed healthy subjects underwent block-design task fMRI examination. Task movements included lip pursing, cheek bulging, grinning and vertical tongue excursion. Statistical parametric mapping (SPM8) was applied to analyse the data. A one-sample t-test was used to calculate the common activation area between facial and tongue movements, and a paired t-test was used to test for areas of over- or underactivation in tongue movement compared with each facial movement. The common areas across facial and tongue movements suggested similar motor circuits of activation in both types of movement. Greater activation in tongue movement was situated laterally and inferiorly in the sensorimotor area relative to facial movements. Greater activation in tongue movement was also found in the left superior parietal lobe relative to lip pursing, and greater activation in the bilateral cuneus was detected for grinning compared with tongue movement. © 2015 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  15. Emotional facial expressions differentially influence predictions and performance for face recognition.

    PubMed

    Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M

    2013-01-01

    This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion.

  16. [Emotional facial recognition difficulties as primary deficit in children with attention deficit hyperactivity disorder: a systematic review].

    PubMed

    Rodrigo-Ruiz, D; Perez-Gonzalez, J C; Cejudo, J

    2017-08-16

    It has recently been warned that children with attention deficit hyperactivity disorder (ADHD) show a deficit in emotional competence and emotional intelligence, specifically in their ability to recognize emotions. A systematic review of the scientific literature on the emotional recognition of facial expressions in children with ADHD is presented in order to establish or rule out the existence of emotional deficits as a primary dysfunction in this disorder and, where appropriate, the effect size of the differences relative to typically developing children. The results reveal recent interest in the issue and a lack of information. Although there is no complete agreement, most of the studies show that emotional recognition of facial expressions is affected in children with ADHD, who are significantly less accurate than children from control groups in recognizing emotions communicated through facial expressions. Some of these studies compare the recognition of different discrete emotions, observing that children with ADHD tend to have greater difficulty recognizing negative emotions, especially anger, fear, and disgust. These results have direct implications for the educational and clinical diagnosis of ADHD, and for educational intervention for children with ADHD, in which emotional education might provide a useful aid.

  17. Anxiety disorders in adolescence are associated with impaired facial expression recognition to negative valence.

    PubMed

    Jarros, Rafaela Behs; Salum, Giovanni Abrahão; Belem da Silva, Cristiano Tschiedel; Toazza, Rudineia; de Abreu Costa, Marianna; Fumagalli de Salles, Jerusa; Manfro, Gisele Gus

    2012-02-01

    The aim of the present study was to test the ability of adolescents with a current anxiety diagnosis to recognize facial affective expressions, compared to those without an anxiety disorder. Forty cases and 27 controls were selected from a larger cross-sectional community sample of adolescents aged 10 to 17 years. Adolescents' recognition of six human emotions (sadness, anger, disgust, happiness, surprise and fear) and neutral faces was assessed through a facial labeling test using Ekman's Pictures of Facial Affect (POFA). Adolescents with anxiety disorders had a higher mean number of errors on angry faces compared to controls: 3.1 (SD=1.13) vs. 2.5 (SD=2.5), OR=1.72 (CI95% 1.02 to 2.89; p=0.040). However, they named neutral faces more accurately than adolescents without an anxiety diagnosis: 15% of cases vs. 37.1% of controls presented at least one error on neutral faces, OR=3.46 (CI95% 1.02 to 11.7; p=0.047). No differences were found for the other emotions or in the distribution of errors for each emotional face between the groups. Our findings support an anxiety-mediated influence on the recognition of facial expressions in adolescence. This difficulty in recognizing angry faces, together with greater accuracy in naming neutral faces, may lead to misinterpretation of social cues and can explain some aspects of the impairment in social interactions in adolescents with anxiety disorders. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Visual Scanning Patterns and Executive Function in Relation to Facial Emotion Recognition in Aging

    PubMed Central

    Circelli, Karishma S.; Clark, Uraina S.; Cronin-Golomb, Alice

    2012-01-01

    Objective: The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods: We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results: OA were less accurate than YA at identifying fear (p<.05, r=.44) and more accurate at identifying disgust (p<.05, r=.39). OA fixated less than YA on the top half of the face for disgust, fearful, happy, neutral, and sad faces (p’s<.05, r’s≥.38), whereas there was no group difference for landscapes. For OA, executive function was correlated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion: We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition. PMID:22616800

  19. The effects of acute alcohol intoxication on the cognitive mechanisms underlying false facial recognition.

    PubMed

    Colloff, Melissa F; Flowe, Heather D

    2016-06-01

    False face recognition rates are sometimes higher when faces are learned while under the influence of alcohol. Alcohol myopia theory (AMT) proposes that acute alcohol intoxication during face learning causes people to attend to only the most salient features of a face, impairing the encoding of less salient facial features. Yet, there is currently no direct evidence to support this claim. Our objective was to test whether acute alcohol intoxication impairs face learning by causing subjects to attend to a salient (i.e., distinctive) facial feature over other facial features, as per AMT. We employed a balanced placebo design (N = 100). Subjects in the alcohol group were dosed to achieve a blood alcohol concentration (BAC) of 0.06%, whereas the no alcohol group consumed tonic water. Alcohol expectancy was controlled. Subjects studied faces with or without a distinctive feature (e.g., scar, piercing). An old-new recognition test followed. Some of the test faces were "old" (i.e., previously studied), and some were "new" (i.e., not previously studied). We varied whether the new test faces had a previously studied distinctive feature versus other familiar characteristics. Intoxicated and sober recognition accuracy was comparable, but subjects in the alcohol group made more positive identifications overall compared to the no alcohol group. The results are not in keeping with AMT. Rather, a more general cognitive mechanism appears to underlie false face recognition in intoxicated subjects. Specifically, acute alcohol intoxication during face learning results in more liberal choosing, perhaps because of an increased reliance on familiarity.

  20. A Modified Sparse Representation Method for Facial Expression Recognition.

    PubMed

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we present a facial expression recognition method based on a modified sparse representation recognition (MSRR) approach. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of building the dictionary directly from samples, and add block dictionary training to the training process. In the third stage, stOMP (stagewise orthogonal matching pursuit) is used to speed up the convergence of OMP (orthogonal matching pursuit). In addition, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise on a self-built database and on Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze recognition performance and time efficiency. Simulation results show that the coefficients produced by the MSRR method contain classifying information, which improves computing speed and achieves a satisfying recognition result.

  1. A Modified Sparse Representation Method for Facial Expression Recognition

    PubMed Central

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we present a facial expression recognition method based on a modified sparse representation recognition (MSRR) approach. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of building the dictionary directly from samples, and add block dictionary training to the training process. In the third stage, stOMP (stagewise orthogonal matching pursuit) is used to speed up the convergence of OMP (orthogonal matching pursuit). In addition, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise on a self-built database and on Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze recognition performance and time efficiency. Simulation results show that the coefficients produced by the MSRR method contain classifying information, which improves computing speed and achieves a satisfying recognition result. PMID:26880878
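
    Since this record describes a sparse-representation classifier, a sketch of the classification step this family of methods shares may be useful: solve for a sparse code of the test sample over a class-labeled dictionary, then assign the class whose atoms give the smallest reconstruction residual. The paper's LC-K-SVD training and stOMP solver are not available in standard libraries, so plain OMP and a randomly generated dictionary stand in; every name and shape here is an illustrative assumption.

```python
# Hedged sketch of sparse-representation classification (SRC) with plain OMP.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(D, labels, y, n_nonzero=10):
    """Classify y by the class whose atoms best reconstruct it.
    D: (n_features, n_atoms) dictionary; labels: class label of each atom."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, y)                                # treats atoms as regressors
    x = omp.coef_                                # sparse code over the atoms
    residuals = {c: np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
                 for c in np.unique(labels)}     # class-wise reconstruction error
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(1)
D = rng.normal(size=(100, 60))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
labels = np.repeat(np.arange(6), 10)             # e.g. 6 expression classes
y = D[:, 3] + 0.05 * rng.normal(size=100)        # noisy sample of class 0
print(src_classify(D, labels, y))                # -> 0
```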

  2. Effects of task demands on the early neural processing of fearful and happy facial expressions

    PubMed Central

    Itier, Roxane J.; Neath-Tavares, Karly N.

    2017-01-01

    Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during gender discrimination, explicit emotion discrimination and oddball detection tasks, the most studied tasks in the field. Fixation on the nose of the face was enforced with an eye tracker using a gaze-contingent presentation. Task demands modulated amplitudes from 200–350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting on the N170, from 150–350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for any of the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant to the task at hand, the neural correlates of fearful and happy facial expressions seem immune to task demands during the first 350 ms of visual processing. PMID:28315309

  3. Identifying and detecting facial expressions of emotion in peripheral vision.

    PubMed

    Smith, Fraser W; Rossit, Stephanie

    2018-01-01

    Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has seen only limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigate facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. We demonstrate, as expected, a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprise being the best-recognized expressions in peripheral vision. In detection, however, while happiness and surprise are still well detected, fear is also well detected, and we show that fear is better detected than recognized. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial frequency content of the face stimulus.

  4. Identifying and detecting facial expressions of emotion in peripheral vision

    PubMed Central

    Smith, Fraser W; Rossit, Stephanie

    2018-01-01

    Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has seen only limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigate facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. We demonstrate, as expected, a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprise being the best-recognized expressions in peripheral vision. In detection, however, while happiness and surprise are still well detected, fear is also well detected, and we show that fear is better detected than recognized. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial frequency content of the face stimulus. PMID:29847562

  5. The effects of gender and COMT Val158Met polymorphism on fearful facial affect recognition: a fMRI study.

    PubMed

    Kempton, Matthew J; Haldane, Morgan; Jogia, Jigar; Christodoulou, Tessa; Powell, John; Collier, David; Williams, Steven C R; Frangou, Sophia

    2009-04-01

    The functional catechol-O-methyltransferase (COMT Val108/158Met) polymorphism has been shown to have an impact on tasks of executive function, memory and attention and, more recently, on tasks with an affective component. As oestrogen reduces COMT activity, we focused on the interaction between gender and COMT genotype on brain activations during an affective processing task. We used functional MRI (fMRI) to record brain activations from 74 healthy subjects who engaged in a facial affect recognition task; subjects viewed and identified fearful compared to neutral faces. There was no main effect of the COMT polymorphism, gender or genotype × gender interaction on task performance. We found a significant effect of gender on brain activations in the left amygdala and right temporal pole, where females demonstrated increased activations over males. Within these regions, Val/Val carriers showed greater signal magnitude compared to Met/Met carriers, particularly in females. The COMT Val108/158Met polymorphism impacts on gender-related patterns of activation in limbic and paralimbic regions, but the functional significance of any oestrogen-related COMT inhibition appears modest.

  6. The Impact of Sex Differences on Odor Identification and Facial Affect Recognition in Patients with Schizophrenia Spectrum Disorders.

    PubMed

    Mossaheb, Nilufar; Kaufmann, Rainer M; Schlögelhofer, Monika; Aninilkumparambil, Thushara; Himmelbauer, Claudia; Gold, Anna; Zehetmayer, Sonja; Hoffmann, Holger; Traue, Harald C; Aschauer, Harald

    2018-01-01

    Social interactive functions such as facial emotion recognition and smell identification have been shown to differ between women and men. However, little is known about how these differences are mirrored in patients with schizophrenia and how these abilities interact with each other and with other clinical variables in patients vs. healthy controls. Standardized instruments were used to assess facial emotion recognition [Facially Expressed Emotion Labelling (FEEL)] and smell identification [University of Pennsylvania Smell Identification Test (UPSIT)] in 51 patients with schizophrenia spectrum disorders and 79 healthy controls; furthermore, working memory functions and clinical variables were assessed. In both the univariate and the multivariate results, illness showed a significant influence on UPSIT and FEEL. The inclusion of age and working memory in the MANOVA resulted in a differential effect with sex and working memory as remaining significant factors. Duration of illness was correlated with both emotion recognition and smell identification in men only, whereas immediate general psychopathology and negative symptoms were associated with emotion recognition only in women. Being affected by schizophrenia spectrum disorder impacts one's ability to correctly recognize facial affects and identify odors. Converging evidence suggests a link between the investigated basic and social cognitive abilities in patients with schizophrenia spectrum disorders with a strong contribution of working memory and differential effects of modulators in women vs. men.

  7. Face identity recognition in autism spectrum disorders: a review of behavioral studies.

    PubMed

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2012-03-01

    Face recognition--the ability to recognize a person from their facial appearance--is essential for normal social interaction. Face recognition deficits have been implicated in the most common disorder of social interaction: autism. Here we ask: is face identity recognition in fact impaired in people with autism? Reviewing behavioral studies we find no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism: markers of typical face identity recognition, such as the face inversion effect, seem to be present in people with autism. However, quantitatively--i.e., how well facial identity is remembered or discriminated--people with autism perform worse than typical individuals. This impairment is particularly clear in face memory and in face perception tasks in which a delay intervenes between sample and test, and less so in tasks with no memory demand. Although some evidence suggests that this deficit may be specific to faces, further evidence on this question is necessary. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Identity recognition and happy and sad facial expression recall: influence of depressive symptoms.

    PubMed

    Jermann, Françoise; van der Linden, Martial; D'Argembeau, Arnaud

    2008-05-01

    Relatively few studies have examined memory bias for social stimuli in depression or dysphoria. The aim of this study was to investigate the influence of depressive symptoms on memory for facial information. A total of 234 participants completed the Beck Depression Inventory II and a task examining memory for facial identity and expression of happy and sad faces. For both facial identity and expression, the recollective experience was measured with the Remember/Know/Guess procedure (Gardiner & Richardson-Klavehn, 2000). The results show no major association between depressive symptoms and memory for identities. However, dysphoric individuals consciously recalled (Remember responses) more sad facial expressions than non-dysphoric individuals. These findings suggest that sad facial expressions led to more elaborate encoding, and thereby better recollection, in dysphoric individuals.

  9. Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.

    PubMed

    Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M

    2017-01-01

    Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. Methods: In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant (p<.06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces and this was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhanced facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).

  10. Skilful communication: Emotional facial expressions recognition in very old adults.

    PubMed

    María Sarabia-Cobo, Carmen; Navas, María José; Ellgring, Heiner; García-Rodríguez, Beatriz

    2016-02-01

    The main objective of this study was to assess the changes associated with ageing in the ability to identify emotional facial expressions and to what extent such age-related changes depend on the intensity with which each basic emotion is manifested. A randomised controlled trial was carried out on 107 subjects who performed a six-alternative forced-choice emotional expression identification task. The stimuli consisted of 270 virtual emotional faces expressing the six basic emotions (happiness, sadness, surprise, fear, anger and disgust) at three different levels of intensity (low, pronounced and maximum). The virtual faces were generated by facial surface changes, as described in the Facial Action Coding System (FACS). A progressive age-related decline in the ability to identify emotional facial expressions was detected. The ability to recognise the intensity of expressions was one of the most strongly impaired variables associated with age, although the valence of emotion was also poorly identified, particularly in terms of recognising negative emotions. Nurses should be mindful of how ageing affects communication with older patients. In this study, very old adults displayed more difficulties in identifying emotional facial expressions, especially low intensity expressions and those associated with difficult emotions like disgust or fear. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Effects of task demands on the early neural processing of fearful and happy facial expressions.

    PubMed

    Itier, Roxane J; Neath-Tavares, Karly N

    2017-05-15

    Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during gender discrimination, explicit emotion discrimination and oddball detection tasks, the most studied tasks in the field. Fixation on the nose of the face was enforced with an eye tracker using a gaze-contingent presentation. Task demands modulated amplitudes from 200–350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting on the N170, from 150–350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for any of the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant to the task at hand, the neural correlates of fearful and happy facial expressions seem immune to task demands during the first 350 ms of visual processing. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Reaction Time of Facial Affect Recognition in Asperger's Disorder for Cartoon and Real, Static and Moving Faces

    ERIC Educational Resources Information Center

    Miyahara, Motohide; Bray, Anne; Tsujii, Masatsugu; Fujita, Chikako; Sugiyama, Toshiro

    2007-01-01

    This study used a choice reaction-time paradigm to test the perceived impairment of facial affect recognition in Asperger's disorder. Twenty teenagers with Asperger's disorder and 20 controls were compared with respect to the latency and accuracy of response to happy or disgusted facial expressions, presented in cartoon or real images and in…

  13. The Effects of Sex of Subject, Sex and Attractiveness of Photo on Facial Recognition.

    ERIC Educational Resources Information Center

    Carroo, Agatha W.; Mozingo, R.

    1989-01-01

    Assessed the effect of sex of subject, and sex and attractiveness of photo, on facial recognition with 25 male and 25 female college students. Found that male subjects performed better with male faces, as measured by d' scores. (Author/ABL)
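
    Several records in this collection report sensitivity as d' (d-prime) scores, so a brief worked example of the standard signal-detection computation may help; the log-linear correction for extreme rates is a common convention, not something taken from this record.

```python
# Illustrative d' computation: z(hit rate) - z(false-alarm rate).
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g. 18 hits / 2 misses on targets, 4 false alarms / 16 correct rejections
print(round(d_prime(18, 2, 4, 16), 2))  # ~1.97
```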

  14. Convolutional neural networks and face recognition task

    NASA Astrophysics Data System (ADS)

    Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.

    2017-09-01

    Computer vision tasks have remained very important over the last several years. One of the most complicated problems in computer vision is face recognition, which can be used in security systems to provide safety and to identify a person among others. There is a variety of approaches to this task, but there is still no universal solution that gives adequate results in all cases. The current paper presents the following approach. First, we extract the area containing the face; then we apply a Canny edge detector. In the next stage we use convolutional neural networks (CNN) to solve the face recognition and person identification task.
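
    The described pipeline (extract the face area, compute Canny edges, feed the result to a CNN) can be sketched as follows, assuming OpenCV and PyTorch; the tiny network, the 64×64 input and the ten identities are illustrative guesses, not the paper's architecture.

```python
# Hedged sketch of a face-crop -> Canny -> CNN identification pipeline.
import cv2
import numpy as np
import torch
import torch.nn as nn

class EdgeFaceCNN(nn.Module):
    def __init__(self, n_identities=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_identities)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Stand-in for a detected face region (grayscale); in practice this crop
# would come from a face detector run on the full frame.
face = (np.random.rand(64, 64) * 255).astype(np.uint8)
edges = cv2.Canny(face, 100, 200)                       # edge map, as in the paper
x = torch.from_numpy(edges).float().div(255).view(1, 1, 64, 64)
logits = EdgeFaceCNN()(x)
print(logits.shape)  # torch.Size([1, 10]) -> identity scores
```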

  15. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
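
    A minimal sketch of the modular (region-wise) kernel eigenspace idea follows: fit one kernel eigenspace per local sub-region and concatenate the projections into a single descriptor. The region size, kernel and component counts are guesses for illustration, and the phase-congruency extraction and decision-level fusion steps are omitted.

```python
# Hedged sketch: one kernel eigenspace per face region, concatenated features.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(2)
# Stand-in for feature maps of 50 training faces, split into four 16x16 regions.
train_maps = rng.random((50, 32, 32))

def regions(face):
    return [face[r:r+16, c:c+16].ravel() for r in (0, 16) for c in (0, 16)]

# Fit one kernel eigenspace per region on the training set.
kpcas = []
for i in range(4):
    block = np.array([regions(f)[i] for f in train_maps])
    kpcas.append(KernelPCA(n_components=10, kernel="rbf").fit(block))

def localized_features(face):
    # Concatenate per-region kernel projections into one descriptor.
    return np.concatenate(
        [k.transform(r.reshape(1, -1))[0] for k, r in zip(kpcas, regions(face))])

print(localized_features(train_maps[0]).shape)  # (40,)
```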

  16. Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.

    PubMed

    Wieckowski, Andrea Trubanova; White, Susan W

    2017-01-01

    Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less on the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly, but not during the scripted task. Results suggest altered eye gaze to the mouth region, but not the eye region, as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.

  17. Analysis of differences between Western and East-Asian faces based on facial region segmentation and PCA for facial expression recognition

    NASA Astrophysics Data System (ADS)

    Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide

    2017-01-01

    Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focusing on three facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, using 125 feature points from the face. Both methods are evaluated using four standard databases for both racial groups, and the results are compared with a cross-cultural human study with 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the eyes-eyebrows and mouth regions, for expressions of fear and disgust respectively. This work presents important findings for the better design of automatic facial expression recognition systems that account for differences between the two racial groups.
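
    As a rough illustration of the kind of PCA-based group comparison described, the sketch below projects geometric features from two groups into a shared principal-component space and asks where the group means diverge most; the data, shapes and the injected offset are random stand-ins, not the paper's databases.

```python
# Hedged sketch: compare two groups in a shared PCA space of landmark features.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# 125 (x, y) landmark points -> a 250-dim geometric feature per face
western = rng.normal(0.0, 1.0, size=(80, 250))
east_asian = rng.normal(0.2, 1.0, size=(80, 250))   # offset = fake group difference

pca = PCA(n_components=10).fit(np.vstack([western, east_asian]))
w_proj, e_proj = pca.transform(western), pca.transform(east_asian)

# The gap between group means per component hints where the groups diverge.
gap = np.abs(w_proj.mean(axis=0) - e_proj.mean(axis=0))
print("component with largest group difference:", int(gap.argmax()))
```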

  18. Does Facial Expression Recognition Provide a Toehold for the Development of Emotion Understanding?

    ERIC Educational Resources Information Center

    Strand, Paul S.; Downs, Andrew; Barbosa-Leiker, Celestina

    2016-01-01

    The authors explored predictions from basic emotion theory (BET) that facial emotion expression recognition skills are insular with respect to their own development, and yet foundational to the development of emotional perspective-taking skills. Participants included 417 preschool children for whom estimates of these 2 emotion understanding…

  19. Super-recognition in development: A case study of an adolescent with extraordinary face recognition skills.

    PubMed

    Bennetts, Rachel J; Mole, Joseph; Bate, Sarah

    2017-09-01

    Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.

  20. EEG based topography analysis in string recognition task

    NASA Astrophysics Data System (ADS)

    Ma, Xiaofei; Huang, Xiaolin; Shen, Yuxiaotong; Qin, Zike; Ge, Yun; Chen, Ying; Ning, Xinbao

    2017-03-01

    Visual perception and recognition is a complex process, during which different parts of the brain are involved depending on the specific modality of the vision target, e.g. face, character, or word. In this study, brain activities in a string recognition task, compared with an idle control state, are analyzed through topographies based on multiple measurements, i.e. sample entropy, symbolic sample entropy and normalized rhythm power, extracted from simultaneously collected scalp EEG. Our analyses show that, for most subjects, both symbolic sample entropy and normalized gamma power in the string recognition task are significantly higher than those in the idle state, especially at locations P4, O2, T6 and C4. This implies that these regions are highly involved in the string recognition task. Since symbolic sample entropy measures complexity from the perspective of new information generation, and normalized rhythm power reveals the power distribution in the frequency domain, the two types of indices provide complementary information about the underlying dynamics.
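
    Sample entropy, one of the indices used here, admits a compact reference implementation. The sketch below uses the common m = 2, r = 0.2·SD convention, which may differ from this paper's exact variant, and it does not implement the symbolic version.

```python
# Hedged reference implementation of sample entropy (SampEn).
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(m):
        # Chebyshev distance between all pairs of length-m templates.
        templ = np.array([x[i:i+m] for i in range(len(x) - m)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        n = len(templ)
        return (np.sum(d <= r) - n) / 2          # exclude self-matches
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(4)
print(sample_entropy(rng.normal(size=500)))                   # higher: irregular
print(sample_entropy(np.sin(np.linspace(0, 20*np.pi, 500))))  # lower: regular
```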

  1. Facial expression recognition and emotional regulation in narcolepsy with cataplexy.

    PubMed

    Bayard, Sophie; Croisier Langenier, Muriel; Dauvilliers, Yves

    2013-04-01

    Cataplexy is pathognomonic of narcolepsy with cataplexy, and is defined by a transient loss of muscle tone triggered by strong emotions. Recent research suggests abnormal amygdala function in narcolepsy with cataplexy. Emotion processing and emotional regulation strategies are complex functions involving cortical and limbic structures, like the amygdala. As the amygdala has been shown to play a role in facial emotion recognition, we tested the hypothesis that patients with narcolepsy with cataplexy would have impaired recognition of facial emotional expressions compared with patients affected with central hypersomnia without cataplexy and healthy controls. We also aimed to determine whether cataplexy modulates emotional regulation strategies. Emotional intensity, arousal and valence ratings on Ekman faces displaying happiness, surprise, fear, anger, disgust, sadness and neutral expressions of 21 drug-free patients with narcolepsy with cataplexy were compared with 23 drug-free sex-, age- and intellectual level-matched adult patients with hypersomnia without cataplexy and 21 healthy controls. All participants underwent polysomnography recording and multiple sleep latency tests, and completed depression, anxiety and emotional regulation questionnaires. Performance of patients with narcolepsy with cataplexy did not differ from patients with hypersomnia without cataplexy or healthy controls on either intensity rating of each emotion on its prototypical label or mean ratings for valence and arousal. Moreover, patients with narcolepsy with cataplexy did not use different emotional regulation strategies. The level of depressive and anxious symptoms in narcolepsy with cataplexy did not differ from the other groups. Our results demonstrate that patients with narcolepsy with cataplexy accurately perceive and discriminate facial emotions, and regulate emotions normally. The absence of alteration of perceived affective valence remains of major clinical interest in narcolepsy with cataplexy.

  2. Emotion recognition deficits associated with ventromedial prefrontal cortex lesions are improved by gaze manipulation.

    PubMed

    Wolf, Richard C; Pujara, Maia; Baskaya, Mustafa K; Koenigs, Michael

    2016-09-01

    Facial emotion recognition is a critical aspect of human communication. Since abnormalities in facial emotion recognition are associated with social and affective impairment in a variety of psychiatric and neurological conditions, identifying the neural substrates and psychological processes underlying facial emotion recognition will help advance basic and translational research on social-affective function. Ventromedial prefrontal cortex (vmPFC) has recently been implicated in deploying visual attention to the eyes of emotional faces, although there is mixed evidence regarding the importance of this brain region for recognition accuracy. In the present study of neurological patients with vmPFC damage, we used an emotion recognition task with morphed facial expressions of varying intensities to determine (1) whether vmPFC is essential for emotion recognition accuracy, and (2) whether instructed attention to the eyes of faces would be sufficient to improve any accuracy deficits. We found that vmPFC lesion patients are impaired, relative to neurologically healthy adults, at recognizing moderate intensity expressions of anger and that recognition accuracy can be improved by providing instructions of where to fixate. These results suggest that vmPFC may be important for the recognition of facial emotion through a role in guiding visual attention to emotionally salient regions of faces. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Sex differences in emotion recognition: Evidence for a small overall female superiority on facial disgust.

    PubMed

    Connolly, Hannah L; Lefevre, Carmen E; Young, Andrew W; Lewis, Gary J

    2018-05-21

    Although it is widely believed that females outperform males in the ability to recognize other people's emotions, this conclusion is not well supported by the extant literature. The current study sought to provide a strong test of the female superiority hypothesis by investigating sex differences in emotion recognition for five basic emotions using stimuli well-calibrated for individual differences assessment, across two expressive domains (face and body), and in a large sample (N = 1,022: Study 1). We also assessed the stability and generalizability of our findings with two independent replication samples (N = 303: Study 2, N = 634: Study 3). In Study 1, we observed that females were superior to males in recognizing facial disgust and sadness. In contrast, males were superior to females in recognizing bodily happiness. The female superiority for recognition of facial disgust was replicated in Studies 2 and 3, and this observation also extended to an independent stimulus set in Study 2. No other sex differences were stable across studies. These findings provide evidence for the presence of sex differences in emotion recognition ability, but show that these differences are modest in magnitude and appear to be limited to facial disgust. We discuss whether this sex difference may reflect human evolutionary imperatives concerning reproductive fitness and child care. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  4. Transfer-appropriate processing in recognition memory: perceptual and conceptual effects on recognition memory depend on task demands.

    PubMed

    Parks, Colleen M

    2013-07-01

    Research examining the importance of surface-level information to familiarity in recognition memory tasks is mixed: Sometimes it affects recognition and sometimes it does not. One potential explanation of the inconsistent findings comes from the ideas of dual process theory of recognition and the transfer-appropriate processing framework, which suggest that the extent to which perceptual fluency matters on a recognition test depends in large part on the task demands. A test that recruits perceptual processing for discrimination should show greater perceptual effects and smaller conceptual effects than standard recognition, similar to the pattern of effects found in perceptual implicit memory tasks. This idea was tested in the current experiment by crossing a levels of processing manipulation with a modality manipulation on a series of recognition tests that ranged from conceptual (standard recognition) to very perceptually demanding (a speeded recognition test with degraded stimuli). Results showed that the levels of processing effect decreased and the effect of modality increased when tests were made perceptually demanding. These results support the idea that surface-level features influence performance on recognition tests when they are made salient by the task demands. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  5. Emotion recognition in Parkinson's disease: Static and dynamic factors.

    PubMed

    Wasser, Cory I; Evans, Felicity; Kempnich, Clare; Glikmann-Johnston, Yifat; Andrews, Sophie C; Thyagarajan, Dominic; Stout, Julie C

    2018-02-01

    The authors tested the hypothesis that Parkinson's disease (PD) participants would perform better in an emotion recognition task with dynamic (video) stimuli than in a task using only static (photograph) stimuli, and compared performance on both tasks to that of healthy control participants. In a within-subjects study, 21 PD participants and 20 age-matched healthy controls performed both static and dynamic emotion recognition tasks. The authors used a 2-way analysis of variance (controlling for individual participant variance) to determine the effect of group (PD, control) on emotion recognition performance in static and dynamic facial recognition tasks. Groups did not significantly differ in their performance on the static and dynamic tasks; however, there was a trend for PD participants to perform worse than controls. PD participants may have subtle emotion recognition deficits that are not ameliorated by the addition of contextual cues similar to those found in everyday scenarios. Consistent with previous literature, the results suggest that PD participants may have underlying emotion recognition deficits, which may impact their social functioning. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Multi-Task Convolutional Neural Network for Pose-Invariant Face Recognition

    NASA Astrophysics Data System (ADS)

    Yin, Xi; Liu, Xiaoming

    2018-02-01

    This paper explores multi-task learning (MTL) for face recognition. We answer the questions of how and why MTL can improve the face recognition performance. First, we propose a multi-task Convolutional Neural Network (CNN) for face recognition where identity classification is the main task and pose, illumination, and expression estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weight to each side task, which is a crucial problem in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the variations from the learnt identity features. Extensive experiments on the entire Multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in Multi-PIE for face recognition. Our approach is also applicable to in-the-wild datasets for pose-invariant face recognition and achieves comparable or better performance than state of the art on LFW, CFP, and IJB-A datasets.
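
    The core multi-task arrangement described here (identity classification as the main task, with side tasks acting as regularizers on a shared backbone) can be sketched in a few lines of PyTorch; the fixed side-task weight below stands in for the paper's dynamic-weighting scheme, and the backbone, shapes and class counts are illustrative.

```python
# Hedged sketch of a multi-task face CNN: identity (main) + pose (side) heads.
import torch
import torch.nn as nn

class MultiTaskFaceNet(nn.Module):
    def __init__(self, n_identities=100, n_poses=9):
        super().__init__()
        self.backbone = nn.Sequential(            # shared feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
            nn.Flatten(), nn.Linear(16 * 8 * 8, 128), nn.ReLU())
        self.id_head = nn.Linear(128, n_identities)   # main task
        self.pose_head = nn.Linear(128, n_poses)      # side task (regularizer)

    def forward(self, x):
        h = self.backbone(x)
        return self.id_head(h), self.pose_head(h)

model = MultiTaskFaceNet()
images = torch.randn(4, 3, 64, 64)
id_labels, pose_labels = torch.randint(0, 100, (4,)), torch.randint(0, 9, (4,))

id_logits, pose_logits = model(images)
ce = nn.CrossEntropyLoss()
side_weight = 0.1                  # fixed stand-in for the dynamic loss weight
loss = ce(id_logits, id_labels) + side_weight * ce(pose_logits, pose_labels)
loss.backward()
print(float(loss))
```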

  7. Reduced Reliance on Optimal Facial Information for Identity Recognition in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2013-01-01

    Previous research into face processing in autism spectrum disorder (ASD) has revealed atypical biases toward particular facial information during identity recognition. Specifically, a focus on features (or high spatial frequencies [HSFs]) has been reported for both face and nonface processing in ASD. The current study investigated the development…

  8. Meta-Analysis of Facial Emotion Recognition in Behavioral Variant Frontotemporal Dementia: Comparison With Alzheimer Disease and Healthy Controls.

    PubMed

    Bora, Emre; Velakoulis, Dennis; Walterfang, Mark

    2016-07-01

    Behavioral disturbances and lack of empathy are distinctive clinical features of behavioral variant frontotemporal dementia (bvFTD) in comparison to Alzheimer disease (AD). The aim of this meta-analytic review was to compare facial emotion recognition performance in bvFTD with healthy controls and AD. The current meta-analysis included a total of 19 studies and involved comparisons of 288 individuals with bvFTD and 329 healthy controls, and 162 bvFTD and 147 patients with AD. Facial emotion recognition was significantly impaired in bvFTD in comparison to the healthy controls (d = 1.81) and AD (d = 1.23). In bvFTD, recognition of negative emotions, especially anger (d = 1.48) and disgust (d = 1.41), was severely impaired. Emotion recognition was significantly impaired in bvFTD in comparison to AD in all emotions other than happiness. Impairment of emotion recognition is a relatively specific feature of bvFTD. Routine assessment of social-cognitive abilities, including emotion recognition, can be helpful in better differentiating between cortical dementias such as bvFTD and AD. © The Author(s) 2016.

  9. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants

    PubMed Central

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces and then their face recognition was tested with static face images. Eye tracking methodology was used to record eye movements during familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better was their face recognition, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387

  10. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    ERIC Educational Resources Information Center

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  11. Attention to Social Stimuli and Facial Identity Recognition Skills in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Wilson, C. E.; Brock, J.; Palermo, R.

    2010-01-01

    Background: Previous research suggests that individuals with autism spectrum disorder (ASD) have a reduced preference for viewing social stimuli in the environment and impaired facial identity recognition. Methods: Here, we directly tested a link between these two phenomena in 13 ASD children and 13 age-matched typically developing (TD) controls.…

  12. Computerised working memory based cognitive remediation therapy does not affect Reading the Mind in the Eyes test performance or neural activity during a Facial Emotion Recognition test in psychosis.

    PubMed

    Mothersill, David; Dillon, Rachael; Hargreaves, April; Castorina, Marco; Furey, Emilia; Fagan, Andrew J; Meaney, James F; Fitzmaurice, Brian; Hallahan, Brian; McDonald, Colm; Wykes, Til; Corvin, Aiden; Robertson, Ian H; Donohoe, Gary

    2018-05-27

    Working memory based cognitive remediation therapy (CT) for psychosis has recently been associated with broad improvements in performance on untrained tasks measuring working memory, episodic memory and IQ, and changes in associated brain regions. However, it is unclear if these improvements transfer to the domain of social cognition and neural activity related to performance on social cognitive tasks. We examined performance on the Reading the Mind in the Eyes test (Eyes test) in a large sample of participants with psychosis who underwent working memory based CT (N = 43) compared to a Control Group of participants with psychosis (N = 35). In a subset of this sample, we used functional magnetic resonance imaging (fMRI) to examine changes in neural activity during a facial emotion recognition task in participants who underwent CT (N = 15) compared to a Control Group (N = 15). No significant effects of CT were observed on Eyes test performance or on neural activity during facial emotion recognition, either at p<0.05 family-wise error, or at a p<0.001 uncorrected threshold, within a priori social cognitive regions of interest. This study suggests that working memory based CT does not significantly impact an aspect of social cognition which was measured behaviourally and neurally. It provides further evidence that deficits in the ability to decode mental state from facial expressions are dissociable from working memory deficits, and suggests that future CT programs should target social cognition in addition to working memory for the purposes of further enhancing social function. This article is protected by copyright. All rights reserved.

  13. Searching for emotion or race: task-irrelevant facial cues have asymmetrical effects.

    PubMed

    Lipp, Ottmar V; Craig, Belinda M; Frost, Mareka J; Terry, Deborah J; Smith, Joanne R

    2014-01-01

    Facial cues of threat such as anger and other race membership are detected preferentially in visual search tasks. However, it remains unclear whether these facial cues interact in visual search. If both cues equally facilitate search, a symmetrical interaction would be predicted; anger cues should facilitate detection of other race faces and cues of other race membership should facilitate detection of anger. Past research investigating this race by emotional expression interaction in categorisation tasks revealed an asymmetrical interaction. This suggests that cues of other race membership may facilitate the detection of angry faces but not vice versa. Utilising the same stimuli and procedures across two search tasks, participants were asked to search for targets defined by either race or emotional expression. Contrary to the results revealed in the categorisation paradigm, cues of anger facilitated detection of other race faces whereas differences in race did not differentially influence detection of emotion targets.

  14. Faces in-between: evaluations reflect the interplay of facial features and task-dependent fluency.

    PubMed

    Winkielman, Piotr; Olszanowski, Michal; Gola, Mateusz

    2015-04-01

    Facial features influence social evaluations. For example, faces are rated as more attractive and trustworthy when they have more smiling features and also more female features. However, the influence of facial features on evaluations should be qualified by the affective consequences of fluency (cognitive ease) with which such features are processed. Further, fluency (along with its affective consequences) should depend on whether the current task highlights conflict between specific features. Four experiments are presented. In 3 experiments, participants saw faces varying in expressions ranging from pure anger, through mixed expression, to pure happiness. Perceivers first categorized faces either on a control dimension or on an emotional dimension (angry/happy). Thus, the emotional categorization task made "pure" expressions fluent and "mixed" expressions disfluent. Next, participants made social evaluations. Results show that after emotional categorization, but not control categorization, targets with mixed expressions are relatively devalued. Further, this effect is mediated by categorization disfluency. Additional data from facial electromyography reveal that on a basic physiological level, affective devaluation of mixed expressions is driven by their objective ambiguity. The fourth experiment shows that the relative devaluation of mixed faces that vary in gender ambiguity requires a gender categorization task. Overall, these studies highlight that the impact of facial features on evaluation is qualified by their fluency, and that the fluency of features is a function of the current task. The discussion highlights the implications of these findings for research on emotional reactions to ambiguity. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  15. The role of skin colour in face recognition.

    PubMed

    Bar-Haim, Yair; Saidel, Talia; Yovel, Galit

    2009-01-01

    People have better memory for faces from their own racial group than for faces from other races. It has been suggested that this own-race recognition advantage depends on an initial categorisation of faces into own and other race based on racial markers, resulting in poorer encoding of individual variations in other-race faces. Here, we used a study–test recognition task with stimuli in which the skin colour of African and Caucasian faces was manipulated to produce four categories representing the cross-section between skin colour and facial features. We show that, despite the notion that skin colour plays a major role in categorising faces into own and other-race faces, its effect on face recognition is minor relative to differences across races in facial features.

  16. Clustered Multi-Task Learning for Automatic Radar Target Recognition

    PubMed Central

    Li, Cong; Bao, Weimin; Xu, Luping; Zhang, Hua

    2017-01-01

    Model training is a key technique for radar target recognition. Traditional model training algorithms in the framework of single-task learning ignore the relationships among multiple tasks, which degrades the recognition performance. In this paper, we propose a clustered multi-task learning method, which can reveal and share the multi-task relationships for radar target recognition. To further make full use of these relationships, the latent multi-task relationships in the projection space are taken into consideration. Specifically, a constraint term in the projection space is proposed, the main idea of which is that multiple tasks within a close cluster should be close to each other in the projection space. In the proposed method, the cluster structures and multi-task relationships can be autonomously learned and utilized in both the original and the projected space. In view of the nonlinear characteristics of radar targets, the proposed method is extended to a non-linear kernel version and the corresponding non-linear multi-task solving method is proposed. Comprehensive experimental studies on a simulated high-resolution range profile dataset and the MSTAR SAR public database verify the superiority of the proposed method over some related algorithms. PMID:28953267
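
    The cluster constraint described here can be pictured as a penalty on task weight vectors. The NumPy block below is a hedged sketch, not the paper's formulation: the projection matrix P, the cluster assignment, and the trade-off constants lam_orig and lam_proj are placeholders chosen for illustration.

    ```python
    import numpy as np

    def clustered_mtl_penalty(W, clusters, P, lam_orig=1.0, lam_proj=1.0):
        """W: (n_tasks, dim) task weight matrix; clusters: list of index
        arrays, one per cluster; P: (k, dim) projection matrix."""
        penalty = 0.0
        for idx in clusters:
            Wc = W[idx]                     # weights of tasks in one cluster
            # pull cluster members toward their mean in the original space
            penalty += lam_orig * np.sum((Wc - Wc.mean(axis=0)) ** 2)
            Zc = Wc @ P.T                   # the same tasks after projection
            # the abstract's extra constraint: stay close in projected space too
            penalty += lam_proj * np.sum((Zc - Zc.mean(axis=0)) ** 2)
        return penalty

    # toy usage: 4 tasks in 2 clusters, 5-dim weights, 3-dim projection
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 5))
    P = rng.normal(size=(3, 5))
    print(clustered_mtl_penalty(W, [np.array([0, 1]), np.array([2, 3])], P))
    ```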

  17. Facial emotion recognition deficits: The new face of schizophrenia

    PubMed Central

    Behere, Rishikesh V.

    2015-01-01

    Schizophrenia has been classically described as having positive, negative, and cognitive symptom dimensions. Emerging evidence strongly supports a fourth dimension of social cognitive symptoms, with facial emotion recognition deficits (FERD) representing a new face in our understanding of this complex disorder. FERD have been described as one of the important deficits in schizophrenia and could be trait markers for the disorder. FERD are associated with socio-occupational dysfunction and hence are of important clinical relevance. This review discusses FERD in schizophrenia, challenges in its assessment in our cultural context, its implications for understanding neurobiological mechanisms, and its clinical applications. PMID:26600574

  18. Test-retest reliability and task order effects of emotional cognitive tests in healthy subjects.

    PubMed

    Adams, Thomas; Pounder, Zoe; Preston, Sally; Hanson, Andy; Gallagher, Peter; Harmer, Catherine J; McAllister-Williams, R Hamish

    2016-11-01

    Little is known of the retest reliability of emotional cognitive tasks or the impact of using different tasks employing similar emotional stimuli within a battery. We investigated this in healthy subjects. We found improved overall performance in an emotional attentional blink task (EABT) with repeat testing at one hour and one week compared to baseline, but the impact of an emotional stimulus on performance was unchanged. Similarly, performance on a facial expression recognition task (FERT) was better one week after a baseline test, though the relative effect of specific emotions was unaltered. There was no effect of repeat testing on an emotional word categorising, recall and recognition task. We found no difference in performance in the FERT and EABT irrespective of task order. We concluded that it is possible to use emotional cognitive tasks in longitudinal studies and combine tasks using emotional facial stimuli in a single battery.

  19. Deficits in recognizing disgust facial expressions and Internet addiction: Perceived stress as a mediator.

    PubMed

    Chen, Zhongting; Poon, Kai-Tak; Cheng, Cecilia

    2017-08-01

    Studies have examined social maladjustment among individuals with Internet addiction, but little is known about their deficits in specific social skills and the underlying psychological mechanisms. The present study filled these gaps by (a) establishing a relationship between deficits in facial expression recognition and Internet addiction, and (b) examining the mediating role of perceived stress that explains this hypothesized relationship. Ninety-seven participants completed validated questionnaires that assessed their levels of Internet addiction and perceived stress, and performed a computer-based task that measured their facial expression recognition. The results revealed a positive relationship between deficits in recognizing disgust facial expression and Internet addiction, and this relationship was mediated by perceived stress. However, the same findings did not apply to other facial expressions. Ad hoc analyses showed that recognizing disgust was more difficult than recognizing other facial expressions, reflecting that the former task assesses a social skill that requires cognitive astuteness. The present findings contribute to the literature by identifying a specific social skill deficit related to Internet addiction and by unveiling a psychological mechanism that explains this relationship, thus providing more concrete guidelines for practitioners to strengthen specific social skills that mitigate both perceived stress and Internet addiction. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  20. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

    Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition (OCR), document classification, and barcode reading, to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline generally performs a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
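
    The three categories map naturally onto standard multiprocessing patterns. The sketch below illustrates them in Python under stated assumptions; the worker functions (ocr, classify_document, read_barcode, detect_skew) are hypothetical stand-ins for real imaging routines, not an actual imaging API.

    ```python
    from multiprocessing import Pool

    def ocr(img):                    # placeholder workers; a real system would
        return "text"                # call actual OCR/classifier/barcode code
    def classify_document(img):
        return "invoice"
    def read_barcode(img):
        return "0123456789"
    def detect_skew(region):
        return 0.0                   # per-region skew estimate

    def by_task(img):
        # (1) by task: different workflows run on one image in parallel
        with Pool(3) as p:
            parts = [p.apply_async(f, (img,))
                     for f in (ocr, classify_document, read_barcode)]
            return [r.get() for r in parts]

    def by_region(regions):
        # (2) by image region: the same task maps over sub-images (map step),
        # then the per-region answers are combined (reduce step)
        with Pool() as p:
            estimates = p.map(detect_skew, regions)
        return sum(estimates) / len(estimates)

    def by_meta_algorithm(img, algorithms):
        # (3) by meta-algorithm: several algorithms attack the same image at
        # once; outputs can be merged (e.g., by voting) for accuracy. Workers
        # must be module-level functions so they can be pickled.
        with Pool(len(algorithms)) as p:
            results = [p.apply_async(a, (img,)) for a in algorithms]
            return [r.get() for r in results]

    if __name__ == "__main__":       # guard needed on spawn-based platforms
        print(by_task("page.png"), by_region(["r1", "r2", "r3", "r4"]))
    ```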

  1. Tryptophan depletion decreases the recognition of fear in female volunteers.

    PubMed

    Harmer, C J; Rogers, R D; Tunbridge, E; Cowen, P J; Goodwin, G M

    2003-06-01

    Serotonergic processes have been implicated in the modulation of fear conditioning in humans, postulated to occur at the level of the amygdala. The processing of other fear-relevant cues, such as facial expressions, has also been associated with amygdala function, but an effect of serotonin depletion on these processes has not been assessed. The present study investigated the effects of reducing serotonin function, using acute tryptophan depletion, on the recognition of basic facial expressions of emotions in healthy male and female volunteers. A double-blind between-groups design was used, with volunteers being randomly allocated to receive an amino acid drink specifically lacking tryptophan or a control mixture containing a balanced mixture of these amino acids. Participants were given a facial expression recognition task 5 h after drink administration. This task featured examples of six basic emotions (fear, anger, disgust, surprise, sadness and happiness) that had been morphed between each full emotion and neutral in 10% steps. As a control, volunteers were given a famous face classification task matched in terms of response selection and difficulty level. Tryptophan depletion significantly impaired the recognition of fearful facial expressions in female, but not male, volunteers. This was specific since recognition of other basic emotions was comparable in the two groups. There was also no effect of tryptophan depletion on the classification of famous faces or on subjective state ratings of mood or anxiety. These results confirm a role for serotonin in the processing of fear related cues, and in line with previous findings also suggest greater effects of tryptophan depletion in female volunteers. Although acute tryptophan depletion does not typically affect mood in healthy subjects, the present results suggest that subtle changes in the processing of emotional material may occur with this manipulation of serotonin function.

  2. The interaction between embodiment and empathy in facial expression recognition

    PubMed Central

    Jospe, Karine; Flöel, Agnes; Lavidor, Michal

    2018-01-01

    Previous research has demonstrated that the Action-Observation Network (AON) is involved in both emotional-embodiment (empathy) and action-embodiment mechanisms. In this study, we hypothesized that interfering with the AON would impair action recognition and that this impairment would be modulated by empathy levels. In Experiment 1 (n = 90), participants were asked to recognize facial expressions while their facial motion was restricted. In Experiment 2 (n = 50), we interfered with the AON by applying transcranial Direct Current Stimulation to the motor cortex. In both experiments, we found that interfering with the AON impaired the performance of participants with high empathy levels; however, for the first time, we demonstrated that the interference enhanced the performance of participants with low empathy. This novel finding suggests that the embodiment module may be flexible, and that it can be enhanced in individuals with low empathy by simple manipulation of motor activation. PMID:29378022

  3. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    PubMed

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  4. Action and emotion recognition from point light displays: an investigation of gender differences.

    PubMed

    Alaerts, Kaat; Nackaerts, Evelien; Meyns, Pieter; Swinnen, Stephan P; Wenderoth, Nicole

    2011-01-01

    Folk psychology advocates the existence of gender differences in socio-cognitive functions such as 'reading' the mental states of others or discerning subtle differences in body-language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks, involving the recognition of distinct features from point light displays (PLDs) depicting bodily movements of a male and female actor. Although recognition scores were considerably high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly higher for males compared to females on PLD recognition tasks involving (i) the general recognition of 'biological motion' versus 'non-biological' (or 'scrambled' motion); or (ii) the recognition of the 'emotional state' of the PLD-figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) and for recognizing the gender of the PLD-figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the 'Reading the Mind in the Eyes Test' (Baron-Cohen, 2001)) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs versus facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions are relatively generalized across facial and bodily emotion perception. Moreover, the tight correlation between a subject's ability to discern subtle emotional cues from PLDs and the subject's ability to basically discriminate biological from non-biological motion provides indications that differences in emotion recognition may, at least to some degree…

  5. Cerebro-facio-thoracic dysplasia (Pascual-Castroviejo syndrome): Identification of a novel mutation, use of facial recognition analysis, and review of the literature.

    PubMed

    Tender, Jennifer A F; Ferreira, Carlos R

    2018-04-13

    Cerebro-facio-thoracic dysplasia (CFTD) is a rare, autosomal recessive disorder characterized by facial dysmorphism, cognitive impairment and distinct skeletal anomalies, and has been linked to the TMCO1 defect syndrome. Our objective was to describe two siblings with features consistent with CFTD who carry a novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene. We conducted a literature review and summarized the clinical features and laboratory results of two siblings with a novel pathogenic variant in the TMCO1 gene. Facial recognition analysis was utilized to assess the specificity of facial traits. The novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene is responsible for the clinical features of CFTD in two siblings. Facial recognition analysis allows unambiguous distinction of this syndrome against controls.

  6. Witnesses' blindness for their own facial recognition decisions: a field study.

    PubMed

    Sagana, Anna; Sauerland, Melanie; Merckelbach, Harald

    2013-01-01

    In a field study, we examined choice blindness for eyewitnesses' facial recognition decisions. Seventy-one pedestrians were engaged in a conversation by two experimenters who pretended to be tourists in the center of a European city. After a short interval, pedestrians were asked to identify the two experimenters from separate simultaneous six-person photo lineups. Following each of the two forced-choice recognition decisions, they were confronted with their selection and asked to motivate their decision. However, for one of the recognition decisions, the chosen lineup member was exchanged with a previously unidentified member. Blindness for this identity manipulation occurred at the rate of 40.8%. Furthermore, the detection rate varied as a function of similarity (high vs. low) between the original choice and the manipulated outcome. Finally, choice manipulations undermined the confidence-accuracy relation for detectors to a greater degree than for blind participants. Stimulus ambiguity is discussed as a moderator of choice blindness. Copyright © 2013 John Wiley & Sons, Ltd.

  7. Memory Asymmetry of Forward and Backward Associations in Recognition Tasks

    PubMed Central

    Yang, Jiongjiong; Zhu, Zijian; Mecklinger, Axel; Fang, Zhiyong; Li, Han

    2013-01-01

    There is an intensive debate on whether memory for serial order is symmetric. The objective of this study was to explore whether associative asymmetry is modulated by memory task (recognition vs. cued recall). Participants were asked to memorize word triples (Experiments 1–2) or pairs (Experiments 3–6) during the study phase. They then recalled a word in response to a cue in a cued recall task (Experiments 1–4), and judged whether two presented words were in the same or in a different order compared to the study phase in a recognition task (Experiments 1–6). To control for perceptual matching between the study and test phases, participants were presented with vertically arranged test pairs when making directional judgments in Experiment 5. In Experiment 6, participants also made associative recognition judgments for word pairs presented at the same or the reversed position. The results showed that forward associations were recalled at levels similar to backward associations, and that the correlations between forward and backward associations were high in the cued recall tasks. On the other hand, the direction of forward associations was recognized more accurately (and more quickly) than that of backward associations, and their correlations were comparable to the control condition in the recognition tasks. This forward advantage was also obtained for the associative recognition task. Diminishing positional information did not change the pattern of associative asymmetry. These results suggest that associative asymmetry is modulated by cued recall and recognition manipulations, and that direction, as a constituent part of a memory trace, can facilitate associative memory. PMID:22924326

  8. Emotion recognition in body dysmorphic disorder: application of the Reading the Mind in the Eyes Task.

    PubMed

    Buhlmann, Ulrike; Winter, Anna; Kathmann, Norbert

    2013-03-01

    Body dysmorphic disorder (BDD) is characterized by perceived appearance-related defects, often tied to aspects of the face or head (e.g., acne). Deficits in decoding emotional expressions have been examined in several psychological disorders including BDD. Previous research indicates that BDD is associated with impaired facial emotion recognition, particularly in situations that involve the BDD sufferer him/herself. The purpose of this study was to further evaluate the ability to read other people's emotions among 31 individuals with BDD, and 31 mentally healthy controls. We applied the Reading the Mind in the Eyes task, in which participants are presented with a series of pairs of eyes, one at a time, and are asked to identify the emotion that describes the stimulus best. The groups did not differ with respect to decoding other people's emotions by looking into their eyes. Findings are discussed in light of previous research examining emotion recognition in BDD. Copyright © 2013. Published by Elsevier Ltd.

  9. Biased recognition of facial affect in patients with major depressive disorder reflects clinical state.

    PubMed

    Münkler, Paula; Rothkirch, Marcus; Dalati, Yasmin; Schmack, Katharina; Sterzer, Philipp

    2015-01-01

    Cognitive theories of depression posit that perception is negatively biased in depressive disorder. Previous studies have provided empirical evidence for this notion, but left open the question whether the negative perceptual bias reflects a stable trait or the current depressive state. Here we investigated the stability of negatively biased perception over time. Emotion perception was examined in patients with major depressive disorder (MDD) and healthy control participants in two experiments. In the first experiment subjective biases in the recognition of facial emotional expressions were assessed. Participants were presented with faces that were morphed between sad and neutral and happy expressions and had to decide whether the face was sad or happy. The second experiment assessed automatic emotion processing by measuring the potency of emotional faces to gain access to awareness using interocular suppression. A follow-up investigation using the same tests was performed three months later. In the emotion recognition task, patients with major depression showed a shift in the criterion for the differentiation between sad and happy faces: In comparison to healthy controls, patients with MDD required a greater intensity of the happy expression to recognize a face as happy. After three months, this negative perceptual bias was reduced in comparison to the control group. The reduction in negative perceptual bias correlated with the reduction of depressive symptoms. In contrast to previous work, we found no evidence for preferential access to awareness of sad vs. happy faces. Taken together, our results indicate that MDD-related perceptual biases in emotion recognition reflect the current clinical state rather than a stable depressive trait.

  10. Selective impairment of facial recognition due to a haematoma restricted to the right fusiform and lateral occipital region

    PubMed Central

    Wada, Y; Yamamoto, T

    2001-01-01

    A 67 year old right handed Japanese man developed prosopagnosia caused by a haemorrhage. His only deficit was the inability to perceive and discriminate unfamiliar faces, and to recognise familiar faces. He did not show deficits in visual or visuospatial perception of non-facial stimuli, alexia, visual agnosia, or topographical disorientation. Brain MRI showed a haematoma limited to the right fusiform and the lateral occipital region. Single photon emission computed tomography confirmed that there was no decreased blood flow in the opposite left cerebral hemisphere. The present case indicates that a small, well placed lesion of the right fusiform gyrus and the adjacent area can cause isolated impairment of facial recognition. As far as we know, there has been no published case that has demonstrated this exact lesion site, which was indicated by recent functional MRI studies as the most critical area in facial recognition. PMID:11459906

  11. Cognitive mechanisms of false facial recognition in older adults.

    PubMed

    Edmonds, Emily C; Glisky, Elizabeth L; Bartlett, James C; Rapcsak, Steven Z

    2012-03-01

    Older adults show elevated false alarm rates on recognition memory tests involving faces in comparison to younger adults. It has been proposed that this age-related increase in false facial recognition reflects a deficit in recollection and a corresponding increase in the use of familiarity when making memory decisions. To test this hypothesis, we examined the performance of 40 older adults and 40 younger adults on a face recognition memory paradigm involving three different types of lures with varying levels of familiarity. A robust age effect was found, with older adults demonstrating a markedly heightened false alarm rate in comparison to younger adults for "familiarized lures" that were exact repetitions of faces encountered earlier in the experiment, but outside the study list, and therefore required accurate recollection of contextual information to reject. By contrast, there were no age differences in false alarms to "conjunction lures" that recombined parts of study list faces, or to entirely new faces. Overall, the pattern of false recognition errors observed in older adults was consistent with excessive reliance on a familiarity-based response strategy. Specifically, in the absence of recollection older adults appeared to base their memory decisions on item familiarity, as evidenced by a linear increase in false alarm rates with increasing familiarity of the lures. These findings support the notion that automatic memory processes such as familiarity remain invariant with age, while more controlled memory processes such as recollection show age-related decline.

  12. Food-Induced Emotional Resonance Improves Emotion Recognition.

    PubMed

    Pandolfi, Elisa; Sacripante, Riccardo; Cardini, Flavia

    2016-01-01

    The effect of food substances on emotional states has been widely investigated, showing, for example, that eating chocolate is able to reduce negative mood. Here, for the first time, we have shown that the consumption of specific food substances is not only able to induce particular emotional states, but more importantly, to facilitate recognition of corresponding emotional facial expressions in others. Participants were asked to perform an emotion recognition task before and after eating either a piece of chocolate or a small amount of fish sauce-which we expected to induce happiness or disgust, respectively. Our results showed that being in a specific emotional state improves recognition of the corresponding emotional facial expression. Indeed, eating chocolate improved recognition of happy faces, while disgusted expressions were more readily recognized after eating fish sauce. In line with the embodied account of emotion understanding, we suggest that people are better at inferring the emotional state of others when their own emotional state resonates with the observed one.

  13. Food-Induced Emotional Resonance Improves Emotion Recognition

    PubMed Central

    Pandolfi, Elisa; Sacripante, Riccardo; Cardini, Flavia

    2016-01-01

    The effect of food substances on emotional states has been widely investigated, showing, for example, that eating chocolate is able to reduce negative mood. Here, for the first time, we have shown that the consumption of specific food substances is not only able to induce particular emotional states, but more importantly, to facilitate recognition of corresponding emotional facial expressions in others. Participants were asked to perform an emotion recognition task before and after eating either a piece of chocolate or a small amount of fish sauce—which we expected to induce happiness or disgust, respectively. Our results showed that being in a specific emotional state improves recognition of the corresponding emotional facial expression. Indeed, eating chocolate improved recognition of happy faces, while disgusted expressions were more readily recognized after eating fish sauce. In line with the embodied account of emotion understanding, we suggest that people are better at inferring the emotional state of others when their own emotional state resonates with the observed one. PMID:27973559

  14. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2004-12-01

    In humans, the ability to recognize facial expression relies on the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. Additionally, it is possible that a feedback mechanism plays a role in this process as well. Implementing a model with a similar parallel structure and feedback mechanisms could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.
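
    One way to picture the parallel-stream-plus-feedback architecture is a small network with a coarse stream, a detailed stream, and an optional feedback connection from the coarse output back onto the input. The PyTorch sketch below is an illustrative guess at such a structure, not the authors' model; all layer sizes and names are assumptions.

    ```python
    import torch
    import torch.nn as nn

    class ParallelStreamNet(nn.Module):
        def __init__(self, in_dim=256, n_expressions=6):
            super().__init__()
            self.coarse = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU())
            self.detailed = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                          nn.Linear(64, 64), nn.ReLU())
            # optional feedback: the coarse output modulates the input features
            self.feedback = nn.Linear(16, in_dim)
            self.classify = nn.Linear(16 + 64, n_expressions)

        def forward(self, x, use_feedback=True):
            c = self.coarse(x)                  # fast, coarse stream
            if use_feedback:
                x = x + self.feedback(c)        # feedback re-weights the input
            d = self.detailed(x)                # slower, detailed stream
            return self.classify(torch.cat([c, d], dim=-1))

    # toy usage on a batch of 8 pre-extracted 256-dim face features
    out = ParallelStreamNet()(torch.randn(8, 256))
    print(out.shape)   # torch.Size([8, 6])
    ```

    Comparing accuracy with use_feedback toggled on and off mirrors the ablation the abstract reports (parallel streams helped; feedback did not).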

  15. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2005-01-01

    In humans, the ability to recognize facial expression relies on the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. Additionally, it is possible that a feedback mechanism plays a role in this process as well. Implementing a model with a similar parallel structure and feedback mechanisms could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.

  16. Selective attention to emotional cues and emotion recognition in healthy subjects: the role of mineralocorticoid receptor stimulation.

    PubMed

    Schultebraucks, Katharina; Deuter, Christian E; Duesenberg, Moritz; Schulze, Lars; Hellmann-Regen, Julian; Domke, Antonia; Lockenvitz, Lisa; Kuehl, Linn K; Otte, Christian; Wingenfeld, Katja

    2016-09-01

    Selective attention toward emotional cues and emotion recognition of facial expressions are important aspects of social cognition. Stress modulates social cognition through cortisol, which acts on glucocorticoid (GR) and mineralocorticoid receptors (MR) in the brain. We examined the role of MR activation in attentional bias toward emotional cues and in emotion recognition. We included 40 healthy young women and 40 healthy young men (mean age 23.9 ± 3.3 years), who received either 0.4 mg of the MR agonist fludrocortisone or placebo. A dot-probe paradigm was used to test for attentional biases toward emotional cues (happy and sad faces). Moreover, we used a facial emotion recognition task to investigate the ability to recognize emotional valence (anger and sadness) from facial expression in four graded categories of emotional intensity (20, 30, 40, and 80 %). In the emotional dot-probe task, we found a main effect of treatment and a treatment × valence interaction. Post hoc analyses revealed an attentional bias away from sad faces after placebo intake and, after fludrocortisone, a shift in selective attention toward sad faces compared to placebo. We found no attentional bias toward happy faces after fludrocortisone or placebo intake. In the facial emotion recognition task, there was no main effect of treatment. MR stimulation seems to be important in modulating quick, automatic emotional processing, i.e., a shift in selective attention toward negative emotional cues. Our results confirm and extend previous findings of MR function. However, we did not find an effect of MR stimulation on emotion recognition.
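
    For readers unfamiliar with the dot-probe paradigm, a bias score is conventionally computed as the mean reaction time when the probe replaces the neutral face minus the mean reaction time when it replaces the emotional face; positive values indicate attention drawn toward the emotional cue. A minimal sketch, assuming a simple (location, RT) trial format that is our invention, not the study's data structure:

    ```python
    def bias_score(trials):
        """trials: list of (probe_location, rt_ms), where probe_location is
        'emotional' (congruent trial) or 'neutral' (incongruent trial)."""
        congruent = [rt for loc, rt in trials if loc == 'emotional']
        incongruent = [rt for loc, rt in trials if loc == 'neutral']
        return (sum(incongruent) / len(incongruent)
                - sum(congruent) / len(congruent))

    # toy data: faster when the probe replaces the sad face => bias toward sad
    print(bias_score([('emotional', 480), ('emotional', 470),
                      ('neutral', 510), ('neutral', 505)]))   # 32.5 ms
    ```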

  17. Facial Emotion Recognition in Children with High Functioning Autism and Children with Social Phobia

    ERIC Educational Resources Information Center

    Wong, Nina; Beidel, Deborah C.; Sarver, Dustin E.; Sims, Valerie

    2012-01-01

    Recognizing facial affect is essential for effective social functioning. This study examines emotion recognition abilities in children aged 7-13 years with High Functioning Autism (HFA = 19), Social Phobia (SP = 17), or typical development (TD = 21). Findings indicate that all children identified certain emotions more quickly (e.g., happy [less…

  18. Recognition of facial emotion and affective prosody in children with ASD (+ADHD) and their unaffected siblings.

    PubMed

    Oerlemans, Anoek M; van der Meer, Jolanda M J; van Steijn, Daphne J; de Ruiter, Saskia W; de Bruijn, Yvette G E; de Sonneville, Leo M J; Buitelaar, Jan K; Rommelse, Nanda N J

    2014-05-01

    Autism is a highly heritable and clinically heterogeneous neuropsychiatric disorder that frequently co-occurs with other psychopathologies, such as attention-deficit/hyperactivity disorder (ADHD). An approach to parse heterogeneity is by forming more homogeneous subgroups of autism spectrum disorder (ASD) patients based on their underlying, heritable cognitive vulnerabilities (endophenotypes). Emotion recognition is a likely endophenotypic candidate for ASD and possibly for ADHD. Therefore, this study aimed to examine whether emotion recognition is a viable endophenotypic candidate for ASD and to assess the impact of comorbid ADHD in this context. A total of 90 children with ASD (43 with and 47 without ADHD), 79 unaffected siblings of children with ASD, and 139 controls aged 6-13 years were included to test recognition of facial emotion and affective prosody. Our results revealed that the recognition of both facial emotion and affective prosody was impaired in children with ASD and aggravated by the presence of ADHD. The latter could only be partly explained by typical ADHD cognitive deficits, such as inhibitory and attentional problems. The performance of unaffected siblings could overall be considered at an intermediate level, performing somewhat worse than the controls and better than the ASD probands. Our findings suggest that emotion recognition might be a viable endophenotype in ASD and a fruitful target in future family studies of the genetic contribution to ASD and comorbid ADHD. Furthermore, our results suggest that children with comorbid ASD and ADHD are at highest risk for emotion recognition problems.

  19. Facial affect recognition in early and late-stage schizophrenia patients.

    PubMed

    Romero-Ferreiro, María Verónica; Aguado, Luis; Rodriguez-Torresano, Javier; Palomo, Tomás; Rodriguez-Jimenez, Roberto; Pedreira-Massa, José Luis

    2016-04-01

    Prior studies have shown deficits in social cognition and emotion perception in first-episode psychosis (FEP) and multi-episode schizophrenia (MES) patients. These studies compared patients at different stages of the illness with only a single control group which differed in age from at least one clinical group. The present study provides new evidence of a differential pattern of deficit in facial affect recognition in FEP and MES patients using a double age-matched control design. Compared to their controls, FEP patients only showed impaired recognition of fearful faces (p=.007). In contrast to this, the MES patients showed a more generalized deficit compared to their age-matched controls, with impaired recognition of angry, sad and fearful faces (ps<.01) and an increased misattribution of emotional meaning to neutral faces. PANSS scores of FEP patients on the Depressed factor correlated positively with accuracy in recognizing fearful expressions (r=.473). For the MES group, fear recognition correlated positively with the negative PANSS factor (r=.498), and recognition of sad and neutral expressions was inversely correlated with the disorganized PANSS factor (r=-.461 and r=-.541, respectively). These results provide evidence that a generalized impairment of affect recognition is observed in advanced-stage patients and is not characteristic of the early stages of schizophrenia. Moreover, the finding that anomalous attribution of emotional meaning to neutral faces is observed only in MES patients suggests that an increased attribution of salience to social stimuli is a characteristic of social cognition in advanced stages of the disorder. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Facial emotion recognition in alcohol and substance use disorders: A meta-analysis.

    PubMed

    Castellano, Filippo; Bartoli, Francesco; Crocamo, Cristina; Gamba, Giulia; Tremolada, Martina; Santambrogio, Jacopo; Clerici, Massimo; Carrà, Giuseppe

    2015-12-01

    People with alcohol and substance use disorders (AUDs/SUDs) show worse facial emotion recognition (FER) than controls, though magnitude and potential moderators remain unknown. The aim of this meta-analysis was to estimate the association between AUDs, SUDs and FER impairment. Electronic databases were searched through April 2015. Pooled analyses were based on standardized mean differences between index and control groups with 95% confidence intervals, weighting each study with random effects inverse variance models. Risk of publication bias and role of potential moderators, including task type, were explored. Nineteen of 70 studies assessed for eligibility met the inclusion criteria, comprising 1352 individuals, of whom 714 (53%) had AUDs or SUDs. The association between substance related disorders and FER performance showed an effect size of -0.67 (-0.95, -0.39), and -0.65 (-0.93, -0.37) for AUDs and SUDs, respectively. There was no publication bias and subgroup and sensitivity analyses based on potential moderators confirmed core results. Future longitudinal research should confirm these findings, clarifying the role of specific clinical issues of AUDs and SUDs. Copyright © 2015 Elsevier Ltd. All rights reserved.
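
    The pooling the authors describe, inverse-variance weighting under a random-effects model, can be sketched compactly. One common realization is the DerSimonian-Laird estimator shown below; that specific choice, and the toy effect sizes and variances, are assumptions for illustration, not details or data from the study.

    ```python
    import numpy as np

    def random_effects_pool(d, v):
        """d: per-study standardized mean differences; v: their variances.
        Returns the pooled effect and its 95% confidence interval."""
        d, v = np.asarray(d, float), np.asarray(v, float)
        w = 1.0 / v                                  # fixed-effect weights
        d_fixed = np.sum(w * d) / np.sum(w)
        q = np.sum(w * (d - d_fixed) ** 2)           # heterogeneity statistic Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(d) - 1)) / c)      # between-study variance
        w_star = 1.0 / (v + tau2)                    # random-effects weights
        pooled = np.sum(w_star * d) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

    # toy numbers in the direction the meta-analysis reports (negative = worse
    # FER in the index group); invented, not the 19 included studies
    print(random_effects_pool([-0.8, -0.5, -0.7], [0.04, 0.06, 0.05]))
    ```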

  1. Cerebro-facio-thoracic dysplasia (Pascual-Castroviejo syndrome): Identification of a novel mutation, use of facial recognition analysis, and review of the literature

    PubMed Central

    Tender, Jennifer A.F.; Ferreira, Carlos R.

    2018-01-01

    BACKGROUND: Cerebro-facio-thoracic dysplasia (CFTD) is a rare, autosomal recessive disorder characterized by facial dysmorphism, cognitive impairment and distinct skeletal anomalies and has been linked to the TMCO1 defect syndrome. OBJECTIVE: To describe two siblings with features consistent with CFTD with a novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene. METHODS: We conducted a literature review and summarized the clinical features and laboratory results of two siblings with a novel pathogenic variant in the TMCO1 gene. Facial recognition analysis was utilized to assess the specificity of facial traits. CONCLUSION: The novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene is responsible for the clinical features of CFTD in two siblings. Facial recognition analysis allows unambiguous distinction of this syndrome against controls. PMID:29682451

  2. Schematic drawings of facial expressions for emotion recognition and interpretation by preschool-aged children.

    PubMed

    MacDonald, P M; Kirkpatrick, S W; Sullivan, L A

    1996-11-01

    Schematic drawings of facial expressions were evaluated as a possible assessment tool for research on emotion recognition and interpretation involving young children. A subset of Ekman and Friesen's (1976) Pictures of Facial Affect was used as the standard for comparison. Preschool children (N = 138) were shown drawings and photographs in two context conditions for six emotions (anger, disgust, fear, happiness, sadness, and surprise). The overall correlation between accuracy for the photographs and drawings was .677. A significant difference was found for the stimulus condition (photographs vs. drawings) but not for the administration condition (label-based vs. context-based). Children were significantly more accurate in interpreting drawings than photographs and tended to be more accurate in identifying facial expressions in the label-based administration condition for both photographs and drawings than in the context-based administration condition.

  3. Recognition of facial, auditory, and bodily emotions in older adults.

    PubMed

    Ruffman, Ted; Halberstadt, Jamin; Murray, Janice

    2009-11-01

    Understanding older adults' social functioning difficulties requires insight into their recognition of emotion processing in voices and bodies, not just faces, the focus of most prior research. We examined 60 young and 61 older adults' recognition of basic emotions in facial, vocal, and bodily expressions, and when matching faces and bodies to voices, using 120 emotion items. Older adults were worse than young adults in 17 of 30 comparisons, with consistent difficulties in recognizing both positive (happy) and negative (angry and sad) vocal and bodily expressions. Nearly three quarters of older adults functioned at a level similar to the lowest one fourth of young adults, suggesting that age-related changes are common. In addition, we found that older adults' difficulty in matching emotions was not explained by difficulty on the component sources (i.e., faces or voices on their own), suggesting an additional problem of integration.

  4. Recognition of face and non-face stimuli in autistic spectrum disorder.

    PubMed

    Arkush, Leo; Smith-Collins, Adam P R; Fiorentini, Chiara; Skuse, David H

    2013-12-01

    The ability to remember faces is critical for the development of social competence. From childhood to adulthood, we acquire a high level of expertise in the recognition of facial images, and neural processes become dedicated to sustaining competence. Many people with autism spectrum disorder (ASD) have poor face recognition memory; changes in hairstyle or other non-facial features in an otherwise familiar person affect their recollection skills. The observation implies that they may not use the configuration of the inner face to achieve memory competence, but bolster performance in other ways. We aimed to test this hypothesis by comparing the performance of a group of high-functioning unmedicated adolescents with ASD and a matched control group on a "surprise" face recognition memory task. We compared their memory for unfamiliar faces with their memory for images of houses. To evaluate the role that is played by peripheral cues in assisting recognition memory, we cropped both sets of pictures, retaining only the most salient central features. ASD adolescents had poorer recognition memory for faces than typical controls, but their recognition memory for houses was unimpaired. Cropping images of faces did not disproportionately influence their recall accuracy, relative to controls. House recognition skills (cropped and uncropped) were similar in both groups. In the ASD group only, performance on both sets of task was closely correlated, implying that memory for faces and other complex pictorial stimuli is achieved by domain-general (non-dedicated) cognitive mechanisms. Adolescents with ASD apparently do not use domain-specialized processing of inner facial cues to support face recognition memory. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.

  5. Recovering Faces from Memory: The Distracting Influence of External Facial Features

    ERIC Educational Resources Information Center

    Frowd, Charlie D.; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H.; Hancock, Peter J. B.

    2012-01-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried…

  6. Subject-specific and pose-oriented facial features for face recognition across poses.

    PubMed

    Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping

    2012-10-01

    Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment to the database, while faces of other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This is under the assumption that in forensic applications, most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces of various poses are captured by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face with poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method best suited to handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an AdaBoost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is proven to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.
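
    The fusion step can be pictured with a classic AdaBoost weighting rule: each component classifier receives a weight derived from its training error, and the component decisions are combined by a weighted vote. The sketch below is a hedged illustration; the component errors and the discrete +1/-1 decision format are assumptions, not details from the paper.

    ```python
    import numpy as np

    def adaboost_alphas(errors):
        """Classic AdaBoost weight for each component classifier, computed
        from its (assumed already measured) weighted training error."""
        errors = np.clip(np.asarray(errors, float), 1e-6, 1 - 1e-6)
        return 0.5 * np.log((1 - errors) / errors)

    def fuse(component_decisions, alphas):
        """component_decisions: +1/-1 match decisions from each facial
        component for one probe; returns the fused same-identity decision."""
        return np.sign(np.dot(alphas, component_decisions))

    # say the components are eyes, nose, and mouth classifiers
    alphas = adaboost_alphas([0.20, 0.35, 0.45])  # lower error => larger weight
    print(fuse(np.array([+1, -1, +1]), alphas))   # 1.0 => declared a match
    ```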

  7. Improved facial affect recognition in schizophrenia following an emotion intervention, but not training attention-to-facial-features or treatment-as-usual.

    PubMed

    Tsotsi, Stella; Kosmidis, Mary H; Bozikas, Vasilis P

    2017-08-01

    In schizophrenia, impaired facial affect recognition (FAR) has been associated with patients' overall social functioning. Interventions targeting attention or FAR per se have invariably yielded improved FAR performance in these patients. Here, we compared the effects of two interventions, one targeting FAR and one targeting attention-to-facial-features, with treatment-as-usual on patients' FAR performance. Thirty-nine outpatients with schizophrenia were randomly assigned to one of three groups: FAR intervention (training to recognize emotional information, conveyed by changes in facial features), attention-to-facial-features intervention (training to detect changes in facial features), and treatment-as-usual. Also, 24 healthy controls, matched for age and education, were assigned to one of the two interventions. Two FAR measurements, baseline and post-intervention, were conducted using an original experimental procedure with alternative sets of stimuli. We found improved FAR performance following the intervention targeting FAR in comparison to the other patient groups, which in fact was comparable to the pre-intervention performance of healthy controls in the corresponding intervention group. This improvement was more pronounced in recognizing fear. Our findings suggest that compared to interventions targeting attention, and treatment-as-usual, training programs targeting FAR can be more effective in improving FAR in patients with schizophrenia, particularly assisting them in perceiving threat-related information more accurately. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  8. Reverse control for humanoid robot task recognition.

    PubMed

    Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul

    2012-12-01

    Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
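
    The null-space projection that the method builds on is compact enough to sketch. The lines below show the standard prioritized two-task resolution (the secondary task acts only in the null space of the primary one); they illustrate the task-function formalism, not the authors' recognition algorithm itself.

        import numpy as np

        def nullspace_projector(J):
            """Projector onto the null space of task Jacobian J: joint motions
            of the form P @ z leave the task unchanged (J @ P is ~0)."""
            return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

        def stacked_control(J1, dx1, J2, dx2):
            """Execute task 1 exactly and task 2 as well as possible
            inside task 1's null space (classic prioritized resolution)."""
            q1 = np.linalg.pinv(J1) @ dx1
            P1 = nullspace_projector(J1)
            q2 = np.linalg.pinv(J2 @ P1) @ (dx2 - J2 @ q1)
            return q1 + P1 @ q2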

  9. In the face of emotions: event-related potentials in supraliminal and subliminal facial expression recognition.

    PubMed

    Balconi, Michela; Lucchiari, Claudio

    2005-02-01

    Is facial expression recognition marked by specific event-related potentials (ERPs) effects? Are conscious and unconscious elaborations of emotional facial stimuli qualitatively different processes? In Experiment 1, ERPs elicited by supraliminal stimuli were recorded when 21 participants viewed emotional facial expressions of four emotions and a neutral stimulus. Two ERP components (N2 and P3) were analyzed for their peak amplitude and latency measures. First, emotional face-specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). A more posterior distribution of ERPs was found for N2. Moreover, a lateralization effect was revealed for negative (right lateralization) and positive (left lateralization) facial expressions. In Experiment 2 (20 participants), 1-ms subliminal stimulation was carried out. Unaware information processing was revealed to be quite similar to aware information processing for peak amplitude but not for latency. In fact, unconscious stimulation produced a more delayed peak variation than conscious stimulation.

  10. Task-Dependent Masked Priming Effects in Visual Word Recognition

    PubMed Central

    Kinoshita, Sachiko; Norris, Dennis

    2012-01-01

    A method used widely to study the first 250 ms of visual word recognition is masked priming; these studies have yielded a rich set of data concerning the processes involved in recognizing letters and words. In these studies, there is an implicit assumption that the early processes in word recognition tapped by masked priming are automatic, and masked priming effects should therefore be invariant across tasks. Contrary to this assumption, masked priming effects are modulated by the task goal: for example, only word targets show priming in the lexical decision task, but both words and non-words do in the same-different task; semantic priming effects are generally weak in the lexical decision task but are robust in the semantic categorization task. We explain how such task dependence arises within the Bayesian Reader account of masked priming (Norris and Kinoshita, 2008), and how the task dissociations can be used to understand the early processes in lexical access. PMID:22675316
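
    In the Bayesian Reader, recognition amounts to accumulating noisy perceptual samples into a posterior over lexical hypotheses, and each task defines its own decision variable over that posterior. The toy update below is purely illustrative: the hypothesis set and likelihood function are placeholders, not the published model.

        import numpy as np

        def accumulate(prior, samples, likelihood):
            """Sequential Bayesian evidence accumulation over candidate words.
            prior: (n_words,) probabilities; likelihood(word_index, sample)
            returns p(sample | word) and is a placeholder for the model's
            perceptual noise distribution."""
            post = np.asarray(prior, float)
            for s in samples:
                post = post * np.array([likelihood(w, s) for w in range(len(post))])
                post /= post.sum()   # renormalize after each noisy sample
            return post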

  11. The Development of Dynamic Facial Expression Recognition at Different Intensities in 4- to 18-Year-Olds

    ERIC Educational Resources Information Center

    Montirosso, Rosario; Peverelli, Milena; Frigerio, Elisa; Crespi, Monica; Borgatti, Renato

    2010-01-01

    The primary purpose of this study was to examine the effect of the intensity of emotion expression on children's developing ability to label emotion during a dynamic presentation of five facial expressions (anger, disgust, fear, happiness, and sadness). A computerized task (AFFECT--animated full facial expression comprehension test) was used to…

  12. Callous-unemotional traits and empathy deficits: Mediating effects of affective perspective-taking and facial emotion recognition.

    PubMed

    Lui, Joyce H L; Barry, Christopher T; Sacco, Donald F

    2016-09-01

    Although empathy deficits are thought to be associated with callous-unemotional (CU) traits, findings remain equivocal and little is known about what specific abilities may underlie these purported deficits. Affective perspective-taking (APT) and facial emotion recognition may be implicated, given their independent associations with both empathy and CU traits. The current study examined how CU traits relate to cognitive and affective empathy and whether APT and facial emotion recognition mediate these relations. Participants were 103 adolescents (70 males) aged 16-18 attending a residential programme. CU traits were negatively associated with cognitive and affective empathy to a similar degree. The association between CU traits and affective empathy was partially mediated by APT. Results suggest that assessing mechanisms that may underlie empathic deficits, such as perspective-taking, may be important for youth with CU traits and may inform targets of intervention.

  13. Predictive codes of familiarity and context during the perceptual learning of facial identities

    NASA Astrophysics Data System (ADS)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
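
    The learning rule described, familiarity updated by prediction errors, can be reduced to a delta-rule sketch. The scalar state and the learning rate below are assumptions; the paper's computational model is richer than this.

        def update_familiarity(familiarity, observed, lr=0.1):
            """One exposure: familiarity moves toward the observed evidence by
            a fraction of the prediction error, the quantity reported to
            covary with fusiform face area activity."""
            prediction_error = observed - familiarity
            return familiarity + lr * prediction_error, prediction_error

        # Repeated exposure to the same face (observed = 1.0) drives
        # familiarity up while the prediction error shrinks toward zero.
        f = 0.0
        for _ in range(5):
            f, pe = update_familiarity(f, 1.0)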

  14. Can You See Me Now Visualizing Battlefield Facial Recognition Technology in 2035

    DTIC Science & Technology

    2010-04-01

    County Sheriff’s Department, use certain measurements such as the distance between eyes, the length of the nose, or the shape of the ears. However...captures multiple frames of video and composites them into an appropriately high-resolution image that can be processed by the facial recognition software...stream of data. High resolution video systems, such as those described below will be able to capture orders of magnitude more data in one video frame

  15. Facial emotion recognition system for autistic children: a feasible study based on FPGA implementation.

    PubMed

    Smitha, K G; Vinod, A P

    2015-11-01

    Children with autism spectrum disorder have difficulty in understanding the emotional and mental states from the facial expressions of the people they interact with. The inability to understand other people's emotions will hinder their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. The portability of the system will ensure ease of use and real-time emotion recognition, which will aid immediate feedback while communicating with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to be implemented in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for realizing a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex 7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits.
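
    The feature-extraction step is essentially an eigenface-style PCA projection truncated to a short fixed-point word length. The numpy sketch below is a floating-point stand-in with a crude uniform quantizer mimicking 8-bit arithmetic; it illustrates the computation, not the FPGA design.

        import numpy as np

        def pca_fit(X, k):
            """X: (n_samples, n_pixels) flattened face images.
            Returns the mean face and the top-k principal components."""
            mu = X.mean(axis=0)
            _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
            return mu, Vt[:k]

        def quantize(v, bits=8):
            """Crude uniform quantization standing in for a short
            fixed-point hardware word length (an assumption here)."""
            scale = (2 ** (bits - 1) - 1) / (np.abs(v).max() + 1e-12)
            return np.round(v * scale) / scale

        def pca_features(x, mu, W, bits=8):
            """Project one flattened face onto the PCA basis, then quantize."""
            return quantize(W @ (x - mu), bits)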

  16. Emotional availability, understanding emotions, and recognition of facial emotions in obese mothers with young children.

    PubMed

    Bergmann, Sarah; von Klitzing, Kai; Keitel-Korndörfer, Anja; Wendt, Verena; Grube, Matthias; Herpertz, Sarah; Schütz, Astrid; Klein, Annette M

    2016-01-01

    Recent research has identified mother-child relationships of low quality as possible risk factors for childhood obesity. However, it remains open how mothers' own obesity influences the quality of mother-child interaction, and particularly emotional availability (EA). Also unclear is the influence of maternal emotional competencies, i.e. understanding emotions and recognizing facial emotions. This study aimed to (1) investigate differences between obese and normal-weight mothers regarding mother-child EA, maternal understanding of emotions and recognition of facial emotions, and (2) explore how maternal emotional competencies and maternal weight interact with each other in predicting EA. A better understanding of these associations could inform strategies of obesity prevention, especially in children at risk. We assessed EA, understanding of emotions and recognition of facial emotions in 73 obese versus 73 normal-weight mothers, and their children aged 6 to 47 months (mean child age = 24.49 months; 80 females). Obese mothers showed lower EA and poorer understanding of emotions. Mothers' normal weight and their ability to understand emotions were positively associated with EA. The ability to recognize facial emotions was positively associated with EA in obese but not in normal-weight mothers. Maternal weight status indirectly influenced EA through its effect on understanding emotions. Maternal emotional competencies may play an important role in establishing high EA in interaction with the child. Children of obese mothers experience lower EA, which may contribute to overweight development. We suggest including elements that aim to improve maternal emotional competencies and mother-child EA in prevention or intervention programmes targeting childhood obesity. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Task-dependent enhancement of facial expression and identity representations in human cortex.

    PubMed

    Dobs, Katharina; Schultz, Johannes; Bülthoff, Isabelle; Gardner, Justin L

    2018-05-15

    What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  18. Toward End-to-End Face Recognition Through Alignment Learning

    NASA Astrophysics Data System (ADS)

    Zhong, Yuanyi; Chen, Jiansheng; Huang, Bo

    2017-08-01

    Plenty of effective methods have been proposed for face recognition during the past decade. Although these methods differ essentially in many aspects, a common practice among them is to specifically align the facial area based on prior knowledge of human face structure before feature extraction. In most systems, the face alignment module is implemented independently. This has actually caused difficulties in the design and training of end-to-end face recognition models. In this paper we study the possibility of alignment learning in end-to-end face recognition, in which neither prior knowledge of facial landmarks nor artificially defined geometric transformations is required. Specifically, spatial transformer layers are inserted in front of the feature extraction layers in a Convolutional Neural Network (CNN) for face recognition. Only human identity clues are used to drive the neural network to automatically learn the most suitable geometric transformation and the most appropriate facial area for the recognition task. To ensure reproducibility, our model is trained purely on the publicly available CASIA-WebFace dataset, and is tested on the Labeled Faces in the Wild (LFW) dataset. We have achieved a verification accuracy of 99.08%, which is comparable to state-of-the-art single-model based methods.
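
    The key architectural move, a spatial transformer placed ahead of the feature extractor so that the warp is learned from identity labels alone, can be sketched in a few lines of PyTorch. The localization network and layer sizes below are illustrative guesses, not the paper's architecture.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class AffineSTN(nn.Module):
            """Minimal spatial transformer: a small localization net predicts
            an affine transform that warps the input before feature extraction."""
            def __init__(self):
                super().__init__()
                self.loc = nn.Sequential(
                    nn.Conv2d(1, 8, 7, stride=2), nn.ReLU(),
                    nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(16, 6),
                )
                # start from the identity transform so early training is stable
                self.loc[-1].weight.data.zero_()
                self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

            def forward(self, x):                      # x: (N, 1, H, W)
                theta = self.loc(x).view(-1, 2, 3)     # per-image affine params
                grid = F.affine_grid(theta, x.size(), align_corners=False)
                return F.grid_sample(x, grid, align_corners=False)

    Because the warp parameters are differentiable, the identity-classification loss of the downstream CNN is the only supervision the transformer needs.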

  19. Neural Correlates of Explicit versus Implicit Facial Emotion Processing in ASD

    ERIC Educational Resources Information Center

    Luckhardt, Christina; Kröger, Anne; Cholemkery, Hannah; Bender, Stephan; Freitag, Christine M.

    2017-01-01

    The underlying neural mechanisms of implicit and explicit facial emotion recognition (FER) were studied in children and adolescents with autism spectrum disorder (ASD) compared to matched typically developing controls (TDC). EEG was obtained from N = 21 ASD and N = 16 TDC. Task performance, visual (P100, N170) and cognitive (late positive…

  20. Behavioral and Neuroimaging Evidence for Facial Emotion Recognition in Elderly Korean Adults with Mild Cognitive Impairment, Alzheimer’s Disease, and Frontotemporal Dementia

    PubMed Central

    Park, Soowon; Kim, Taehoon; Shin, Seong A; Kim, Yu Kyeong; Sohn, Bo Kyung; Park, Hyeon-Ju; Youn, Jung-Hae; Lee, Jun-Young

    2017-01-01

    Background: Facial emotion recognition (FER) is impaired in individuals with frontotemporal dementia (FTD) and Alzheimer’s disease (AD) when compared to healthy older adults. Since deficits in emotion recognition are closely related to caregiver burden or social interactions, researchers have fundamental interest in FER performance in patients with dementia. Purpose: The purpose of this study was to identify the performance profiles of six facial emotions (i.e., fear, anger, disgust, sadness, surprise, and happiness) and neutral faces measured among Korean healthy control (HCs), and those with mild cognitive impairment (MCI), AD, and FTD. Additionally, the neuroanatomical correlates of facial emotions were investigated. Methods: A total of 110 (33 HC, 32 MCI, 32 AD, 13 FTD) older adult participants were recruited from two different medical centers in metropolitan areas of South Korea. These individuals underwent an FER test that was used to assess the recognition of emotions or absence of emotion (neutral) in 35 facial stimuli. Repeated measures two-way analyses of variance were used to examine the distinct profiles of emotional recognition among the four groups. We also performed brain imaging and voxel-based morphometry (VBM) on the participants to examine the associations between FER scores and gray matter volume. Results: The mean score of negative emotion recognition (i.e., fear, anger, disgust, and sadness) clearly discriminated FTD participants from individuals with MCI and AD and HC [F(3,106) = 10.829, p < 0.001, η2 = 0.235], whereas the mean score of positive emotion recognition (i.e., surprise and happiness) did not. A VBM analysis showed negative emotions were correlated with gray matter volume of anterior temporal regions, whereas positive emotions were related to gray matter volume of fronto-parietal regions. Conclusion: Impairment of negative FER in patients with FTD is cross-cultural. The discrete neural correlates of FER indicate that emotional

  1. Action and Emotion Recognition from Point Light Displays: An Investigation of Gender Differences

    PubMed Central

    Alaerts, Kaat; Nackaerts, Evelien; Meyns, Pieter; Swinnen, Stephan P.; Wenderoth, Nicole

    2011-01-01

    Folk psychology advocates the existence of gender differences in socio-cognitive functions such as ‘reading’ the mental states of others or discerning subtle differences in body-language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks, involving the recognition of distinct features from point light displays (PLDs) depicting bodily movements of a male and female actor. Although recognition scores were considerably high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly higher for males compared to females on PLD recognition tasks involving (i) the general recognition of ‘biological motion’ versus ‘non-biological’ (or ‘scrambled’ motion); or (ii) the recognition of the ‘emotional state’ of the PLD-figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) and for recognizing the gender of the PLD-figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the ‘Reading the Mind in the Eyes Test’ (Baron-Cohen, 2001)) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs versus facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions are relatively generalized across facial and bodily emotion perception. Moreover, the tight correlation between a subject's ability to discern subtle emotional cues from PLDs and the subject's ability to basically discriminate biological from non-biological motion provides indications that differences in emotion recognition may

  2. Communication Skills Training Exploiting Multimodal Emotion Recognition

    ERIC Educational Resources Information Center

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2017-01-01

    The teaching of communication skills is a labour-intensive task because of the detailed feedback that should be given to learners during their prolonged practice. This study investigates to what extent our FILTWAM facial and vocal emotion recognition software can be used for improving a serious game (the Communication Advisor) that delivers a…

  3. Gender differences in facial emotion recognition in persons with chronic schizophrenia.

    PubMed

    Weiss, Elisabeth M; Kohler, Christian G; Brensinger, Colleen M; Bilker, Warren B; Loughead, James; Delazer, Margarete; Nolan, Karen A

    2007-03-01

    The aim of the present study was to investigate possible sex differences in the recognition of facial expressions of emotion and to investigate the pattern of classification errors in schizophrenic males and females. Such an approach provides an opportunity to inspect the degree to which males and females differ in perceiving and interpreting the different emotions displayed to them and to analyze which emotions are most susceptible to recognition errors. Fifty-six chronically hospitalized schizophrenic patients (38 men and 18 women) completed the Penn Emotion Recognition Test (ER40), a computerized emotion discrimination test presenting 40 color photographs of evoked happy, sad, angry, fearful, and neutral expressions balanced for poser gender and ethnicity. We found a significant sex difference in the patterns of error rates in the Penn Emotion Recognition Test. Neutral faces were more commonly mistaken as angry by schizophrenic men, whereas schizophrenic women misinterpreted neutral faces more frequently as sad. Moreover, female faces were better recognized overall, but fear was better recognized in same-gender photographs, whereas anger was better recognized in different-gender photographs. The findings of the present study lend support to the notion that sex differences in aggressive behavior could be related to a cognitive style characterized by hostile attributions to neutral faces in schizophrenic men.

  4. Sleep deprivation impairs the accurate recognition of human emotions.

    PubMed

    van der Helm, Els; Gujar, Ninad; Walker, Matthew P

    2010-03-01

    We investigated the impact of sleep deprivation on the ability to recognize the intensity of human facial emotions in an experimental laboratory study using randomized total sleep-deprivation or sleep-rested conditions, with between-group and within-group repeated-measures analyses. Thirty-seven healthy participants (21 females) aged 18-25 y were randomly assigned to the sleep control (SC: n = 17) or total sleep deprivation group (TSD: n = 20). Participants performed an emotional face recognition task, in which they evaluated 3 different affective face categories: Sad, Happy, and Angry, each ranging in a gradient from neutral to increasingly emotional. In the TSD group, the task was performed once under conditions of sleep deprivation, and twice under sleep-rested conditions following different durations of sleep recovery. In the SC group, the task was performed twice under sleep-rested conditions, controlling for repeatability. In the TSD group, when sleep-deprived, there was a marked and significant blunting in the recognition of Angry and Happy affective expressions in the moderate (but not extreme) emotional intensity range; these differences were most reliable and significant in female participants. No change in the recognition of Sad expressions was observed. These recognition deficits were, however, ameliorated following one night of recovery sleep. No changes in task performance were observed in the SC group. Sleep deprivation selectively impairs the accurate judgment of human facial emotions, especially threat-relevant (Anger) and reward-relevant (Happy) categories, an effect observed most significantly in females. Such findings suggest that sleep loss impairs discrete affective neural systems, disrupting the identification of salient affective social cues.

  5. Face Recognition Vendor Test 2000: Evaluation Report

    DTIC Science & Technology

    2001-02-16

    The biggest change in the facial recognition community since the completion of the FERET program has been the introduction of facial recognition products...program and significantly lowered system costs. Today there are dozens of facial recognition systems available that have the potential to meet...inquiries from numerous government agencies on the current state of facial recognition technology prompted the DoD Counterdrug Technology Development Program

  6. The face of fear and anger: Facial width-to-height ratio biases recognition of angry and fearful expressions.

    PubMed

    Deska, Jason C; Lloyd, E Paige; Hugenberg, Kurt

    2018-04-01

    The ability to rapidly and accurately decode facial expressions is adaptive for human sociality. Although judgments of emotion are primarily determined by musculature, static face structure can also impact emotion judgments. The current work investigates how facial width-to-height ratio (fWHR), a stable feature of all faces, influences perceivers' judgments of expressive displays of anger and fear (Studies 1a, 1b, & 2), and anger and happiness (Study 3). Across 4 studies, we provide evidence consistent with the hypothesis that perceivers more readily see anger on faces with high fWHR compared with those with low fWHR, which instead facilitates the recognition of fear and happiness. This bias emerges when participants are led to believe that targets displaying otherwise neutral faces are attempting to mask an emotion (Studies 1a & 1b), and is evident when faces display an emotion (Studies 2 & 3). Together, these studies suggest that target facial width-to-height ratio biases ascriptions of emotion with consequences for emotion recognition speed and accuracy. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Analysis of facial expressions in parkinson's disease through video-based automatic methods.

    PubMed

    Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia

    2017-04-01

    The automatic analysis of facial expressions is an evolving field with several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), a major motor sign of this neurodegenerative illness. Facial bradykinesia consists in the reduction/loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic video-based method for studying facial expressions in PD patients. Seventeen Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after the imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects exhibited on average larger distances than PD patients across the tasks. This confirms that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.
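
    The expressivity measure used here, the Euclidean distance of the tracked facial model from a neutral baseline, reduces to a few lines once landmarks are available. A minimal sketch, assuming landmark tracking (e.g., by the face tracker mentioned above) is done elsewhere:

        import numpy as np

        def expressivity(landmarks, neutral):
            """landmarks: (T, K, 2) tracked facial points over T frames;
            neutral: (K, 2) baseline from a neutral face.
            Returns the per-frame mean Euclidean displacement from baseline,
            a simple proxy for facial expressivity."""
            return np.linalg.norm(landmarks - neutral, axis=2).mean(axis=1)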

  8. The mysterious noh mask: contribution of multiple facial parts to the recognition of emotional expressions.

    PubMed

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    A Noh mask worn by expert actors when performing in a Japanese traditional Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward-tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward-tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. The facial images having the mouth of an upward/downward-tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly value subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the traditionally formulated performing styles when evaluating the emotions of the Noh masks.

  9. The Mysterious Noh Mask: Contribution of Multiple Facial Parts to the Recognition of Emotional Expressions

    PubMed Central

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    Background A Noh mask worn by expert actors when performing in a Japanese traditional Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. Methodology/Principal Findings In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward-tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward-tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. The facial images having the mouth of an upward/downward-tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. Conclusions/Significance The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly value subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the traditionally

  10. Biometric correspondence between reface computerized facial approximations and CT-derived ground truth skin surface models objectively examined using an automated facial recognition system.

    PubMed

    Parks, Connie L; Monson, Keith L

    2018-05-01

    This study employed an automated facial recognition system as a means of objectively evaluating biometric correspondence between a ReFace facial approximation and the computed tomography (CT) derived ground truth skin surface of the same individual. High rates of biometric correspondence were observed, irrespective of rank class (Rk) or demographic cohort examined. Overall, 48% of the test subjects' ReFace approximation probes (n=96) were matched to their corresponding ground truth skin surface image at R1, a rank indicating a high degree of biometric correspondence and a potential positive identification. Identification rates improved with each successively broader rank class (R10 = 85%, R25 = 96%, and R50 = 99%), with 100% identification by R57. A sharp increase (39% mean increase) in identification rates was observed between R1 and R10 across most rank classes and demographic cohorts. In contrast, significantly lower (p<0.01) increases in identification rates were observed between R10 and R25 (8% mean increase) and R25 and R50 (3% mean increase). No significant (p>0.05) performance differences were observed across demographic cohorts or CT scan protocols. Performance measures observed in this research suggest that ReFace approximations are biometrically similar to the actual faces of the approximated individuals and, therefore, may have potential operational utility in contexts in which computerized approximations are utilized as probes in automated facial recognition systems. Copyright © 2018. Published by Elsevier B.V.
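
    Rank-based identification rates of this kind are read off a cumulative match characteristic (CMC): a probe counts as identified at rank k if its true mate appears among its k most similar gallery entries. A small numpy sketch, under the assumption that every probe has exactly one mate in the gallery:

        import numpy as np

        def cmc_rates(similarity, gallery_ids, probe_ids, ranks=(1, 10, 25, 50)):
            """similarity: (n_probes, n_gallery) score matrix.
            Returns the identification rate at each requested rank, i.e. the
            fraction of probes whose true mate is among the top-k matches."""
            order = np.argsort(-similarity, axis=1)            # best match first
            ranked_ids = np.asarray(gallery_ids)[order]
            # rank (1-based) at which each probe's true identity first appears;
            # assumes every probe id occurs somewhere in the gallery
            hit_rank = (ranked_ids == np.asarray(probe_ids)[:, None]).argmax(axis=1) + 1
            return {k: float((hit_rank <= k).mean()) for k in ranks}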

  11. Recognition of facial emotion and perceived parental bonding styles in healthy volunteers and personality disorder patients.

    PubMed

    Zheng, Leilei; Chai, Hao; Chen, Wanzhen; Yu, Rongrong; He, Wei; Jiang, Zhengyan; Yu, Shaohua; Li, Huichun; Wang, Wei

    2011-12-01

    Early parental bonding experiences play a role in emotion recognition and expression in later adulthood, and patients with personality disorder frequently experience inappropriate parental bonding styles; therefore, the aim of the present study was to explore whether parental bonding style is correlated with recognition of facial emotion in personality disorder patients. The Parental Bonding Instrument (PBI) and the Matsumoto and Ekman Japanese and Caucasian Facial Expressions of Emotion (JACFEE) photo set tests were administered to 289 participants. Patients scored lower on parental Care but higher on parental Freedom Control and Autonomy Denial subscales, and they displayed less accuracy when recognizing contempt, disgust and happiness than the healthy volunteers. In healthy volunteers, maternal Autonomy Denial significantly predicted accuracy when recognizing fear, and maternal Care predicted the accuracy of recognizing sadness. In patients, paternal Care negatively predicted the accuracy of recognizing anger, paternal Freedom Control predicted the perceived intensity of contempt, and maternal Care predicted the accuracy of recognizing sadness and the intensity of disgust. Parental bonding styles have an impact on the decoding process and sensitivity when recognizing facial emotions, especially in personality disorder patients. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.

  12. Recognition of schematic facial displays of emotion in parents of children with autism.

    PubMed

    Palermo, Mark T; Pasqualetti, Patrizio; Barbati, Giulia; Intelligente, Fabio; Rossini, Paolo Maria

    2006-07-01

    Performance on an emotional labeling task in response to schematic facial patterns representing five basic emotions without the concurrent presentation of a verbal category was investigated in 40 parents of children with autism and 40 matched controls. 'Autism fathers' performed worse than 'autism mothers', who performed worse than controls in decoding displays representing sadness or disgust. This indicates the need to include facial expression decoding tasks in genetic research of autism. In addition, emotional expression interactions between parents and their children with autism, particularly through play, where affect and prosody are 'physiologically' exaggerated, may stimulate development of social competence. Future studies could benefit from a combination of stimuli including photographs and schematic drawings, with and without associated verbal categories. This may allow the subdivision of patients and relatives on the basis of the amount of information needed to understand and process social-emotionally relevant information.

  13. The processing of auditory and visual recognition of self-stimuli.

    PubMed

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice is to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. Task-oriented situation recognition

    NASA Astrophysics Data System (ADS)

    Bauer, Alexander; Fischer, Yvonne

    2010-04-01

    From the advances in computer vision methods for the detection, tracking and recognition of objects in video streams, new opportunities for video surveillance arise: in the future, automated video surveillance systems will be able to detect critical situations early enough to enable an operator to take preventive actions, instead of using video material merely for forensic investigations. However, problems such as limited computational resources, privacy regulations and a constant change in potential threats have to be addressed by a practical automated video surveillance system. In this paper, we show how these problems can be addressed using a task-oriented approach. The system architecture of the task-oriented video surveillance system NEST and an algorithm for the detection of abnormal behavior as part of the system are presented and illustrated for the surveillance of guests inside a video-monitored building.

  15. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions.

    PubMed

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used transiently peak intense expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye movements were recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reaction to a winning body target, whereas a losing face prime inhibited reaction to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction time to both winning body targets and losing body targets was influenced by the invisible peak facial expression primes, which indicated the

  16. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions

    PubMed Central

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used transiently peak intense expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye movements were recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reaction to a winning body target, whereas a losing face prime inhibited reaction to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction time to both winning body targets and losing body targets was influenced by the invisible peak facial expression primes, which indicated the

  17. A 2D range Hausdorff approach to 3D facial recognition.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
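
    For reference, the sketch below computes a symmetric Hausdorff distance between two range-image faces using SciPy's generic routine. It deliberately omits the paper's contribution, the O(N) reformulation that exploits the 2D range-image grid, and the pixel-to-metric scaling; the masks and array layout are assumptions.

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def range_image_to_points(z, mask):
            """Convert a 2D range image (per-pixel depth z) into an (N, 3)
            point set; mask marks valid face pixels.  Pixel coordinates are
            used as-is here, ignoring sensor calibration."""
            ys, xs = np.nonzero(mask)
            return np.column_stack([xs, ys, z[ys, xs]])

        def face_distance(z_probe, m_probe, z_gallery, m_gallery):
            """Symmetric Hausdorff distance: the max of the two directed
            distances between the probe and gallery point sets."""
            a = range_image_to_points(z_probe, m_probe)
            b = range_image_to_points(z_gallery, m_gallery)
            return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])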

  18. Automatically Log Off Upon Disappearance of Facial Image

    DTIC Science & Technology

    2005-03-01

    log off a PC when the user’s face disappears for an adjustable time interval. Among the fundamental technologies of biometrics, facial recognition is... facial recognition products. In this report, a brief overview of face detection technologies is provided. The particular neural network-based face...ensure that the user logging onto the system is the same person. Among the fundamental technologies of biometrics, facial recognition is the only

  19. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    ERIC Educational Resources Information Center

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  20. Face Processing: Models For Recognition

    NASA Astrophysics Data System (ADS)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.

  1. Facial expression recognition under partial occlusion based on fusion of global and local features

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji

    2018-04-01

    Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion that fuses global and local features. In the global aspect, information entropy is first employed to locate the occluded region. Second, Principal Component Analysis (PCA) is adopted to reconstruct the occluded region of the image. After that, a replacement strategy reconstructs the image by substituting the occluded region with the corresponding region of the best-matched image in the training set, and a Pyramid Weber Local Descriptor (PWLD) feature is then extracted. At last, the outputs of an SVM are fitted to probabilities of the target class using a sigmoid function. For the local aspect, an overlapping block-based method is adopted to extract WLD features, each block is weighted adaptively by information entropy, and Chi-square distance and similar-block summation methods are then applied to obtain the probability that the expression belongs to each emotion. Finally, fusion at the decision level is employed to combine the global and local features based on the Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
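
    When both sources assign mass only to the singleton emotion classes, fusion by Dempster's rule reduces to a renormalized elementwise product. A minimal sketch under that simplifying assumption (the paper's full evidence structure is not reproduced):

        import numpy as np

        def dempster_combine(m1, m2):
            """Combine two mass assignments over the same singleton classes.
            With singletons only, agreement is the elementwise product and
            all cross terms are conflict, which is renormalized away."""
            m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
            joint = m1 * m2                    # mass where both sources agree
            conflict = 1.0 - joint.sum()       # mass on disagreeing pairs
            if conflict >= 1.0:
                raise ValueError("total conflict; sources are incompatible")
            return joint / joint.sum()

        # e.g. fuse the global SVM probabilities with the local-block
        # probabilities, then predict the highest-mass emotion:
        # fused = dempster_combine(global_probs, local_probs)
        # prediction = fused.argmax()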

  2. Deficits in Facial Expression Recognition in Male Adolescents with Early-Onset or Adolescence-Onset Conduct Disorder

    ERIC Educational Resources Information Center

    Fairchild, Graeme; Van Goozen, Stephanie H. M.; Calder, Andrew J.; Stollery, Sarah J.; Goodyer, Ian M.

    2009-01-01

    Background: We examined whether conduct disorder (CD) is associated with deficits in facial expression recognition and, if so, whether these deficits are specific to the early-onset form of CD, which emerges in childhood. The findings could potentially inform the developmental taxonomic theory of antisocial behaviour, which suggests that…

  3. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  4. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories.

    PubMed

    Wang, Qiandong; Xiao, Naiqi G; Quinn, Paul C; Hu, Chao S; Qian, Miao; Fu, Genyue; Lee, Kang

    2015-02-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: Contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories

    PubMed Central

    Wang, Qiandong; Xiao, Naiqi G.; Quinn, Paul C.; Hu, Chao S.; Qian, Miao; Fu, Genyue; Lee, Kang

    2014-01-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese faces, Caucasian faces, and racially ambiguous morphed face stimuli. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. PMID:25497461

  6. Externalizing and Internalizing Symptoms Moderate Longitudinal Patterns of Facial Emotion Recognition in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Rosen, Tamara E.; Lerner, Matthew D.

    2016-01-01

    Facial emotion recognition (FER) is thought to be a key deficit domain in autism spectrum disorder (ASD). However, the extant literature is based solely on cross-sectional studies; thus, little is known about even short-term intra-individual dynamics of FER in ASD over time. The present study sought to examine trajectories of FER in ASD youth over…

  7. Facial emotion recognition in childhood-onset bipolar I disorder: an evaluation of developmental differences between youths and adults

    PubMed Central

    Wegbreit, Ezra; Weissman, Alexandra B; Cushman, Grace K; Puzia, Megan E; Kim, Kerri L; Leibenluft, Ellen; Dickstein, Daniel P

    2015-01-01

    Objectives Bipolar disorder (BD) is a severe mental illness with high healthcare costs and poor outcomes. Increasing numbers of youths are diagnosed with BD, and many adults with BD report their symptoms started in childhood, suggesting BD can be a developmental disorder. Studies advancing our understanding of BD have shown alterations in facial emotion recognition in both children and adults with BD compared to healthy comparison (HC) participants, but none have evaluated the development of these deficits. To address this, we examined the effect of age on facial emotion recognition in a sample that included children and adults with confirmed childhood-onset type-I BD, with the adults having been diagnosed and followed since childhood by the Course and Outcome in Bipolar Youth study. Methods Using the Diagnostic Analysis of Non-Verbal Accuracy, we compared facial emotion recognition errors among participants with BD (n = 66; ages 7–26 years) and HC participants (n = 87; ages 7–25 years). Complementary analyses investigated errors for child and adult faces. Results A significant diagnosis-by-age interaction indicated that younger BD participants performed worse than expected relative to HC participants their own age. The deficits occurred for both child and adult faces and were particularly strong for angry child faces, which were most often mistaken as sad. Our results were not influenced by medications, comorbidities/substance use, or mood state/global functioning. Conclusions Younger individuals with BD are worse than their peers at this important social skill. This deficit may be an important developmentally salient treatment target, i.e., for cognitive remediation to improve BD youths’ emotion recognition abilities. PMID:25951752

  8. Speech variability effects on recognition accuracy associated with concurrent task performance by pilots

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.

    1985-01-01

    In the present study, pairs of pilots responded to aircraft warning classification tasks using an isolated-word, speaker-dependent speech recognition system; induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed, and recognition errors were recorded by type for an isolated-word speaker-dependent system and by an offline technique for a connected-word speaker-dependent system. While errors increased with task loading for the isolated-word system, there was no such effect of task loading for the connected-word system.

  9. Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications.

    PubMed

    Corneanu, Ciprian Adrian; Simon, Marc Oliu; Cohn, Jeffrey F; Guerrero, Sergio Escalera

    2016-08-01

    Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging, and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of the most influential methods. We conclude with a general discussion about trends, important questions and future lines of research.

  10. High-resolution face verification using pore-scale facial features.

    PubMed

    Li, Dong; Zhou, Huiling; Lam, Kin-Man

    2015-08-01

    Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also suffers severe degradation under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, robust to alignment errors, that uses HR information based on pore-scale facial features. A new keypoint descriptor, namely pore-Principal Component Analysis (PCA)-Scale-Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods, and can achieve excellent accuracy even when the faces exhibit large variations in expression and pose.
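
    The PCA-SIFT idea that PPCASIFT adapts is straightforward: flatten the local gradients of a keypoint patch and project them onto a pre-learned PCA basis. The NumPy sketch below illustrates only that generic projection step; the function names, patch handling, and the 36-component choice are illustrative assumptions, not the paper's PPCASIFT implementation.

        import numpy as np

        def learn_basis(gradient_vectors, n_components=36):
            # Learn a PCA basis from many flattened gradient patches
            # (a separate training set of patches is assumed).
            X = np.asarray(gradient_vectors, dtype=float)
            mean = X.mean(axis=0)
            _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
            return Vt[:n_components].T, mean          # (dim, k) basis and mean

        def pca_descriptor(patch, basis, mean):
            # Flatten the row/column gradients of a keypoint patch and
            # project them onto the learned basis, PCA-SIFT style.
            gr, gc = np.gradient(patch.astype(float))
            v = np.concatenate([gr.ravel(), gc.ravel()]) - mean
            v = v @ basis
            return v / (np.linalg.norm(v) + 1e-8)     # unit-normalise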

  11. The impact of task demand on visual word recognition.

    PubMed

    Yang, J; Zevin, J

    2014-07-11

    The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Relative judgment theory and the mediation of facial recognition: Implications for theories of eyewitness identification.

    PubMed

    McAdoo, Ryan M; Gronlund, Scott D

    2016-01-01

    Many in the eyewitness identification community believe that sequential lineups are superior to simultaneous lineups because simultaneous lineups encourage inappropriate choosing by promoting comparisons among choices (a relative judgment strategy), whereas sequential lineups reduce this propensity by inducing comparisons of lineup members directly to memory rather than to each other (an absolute judgment strategy). Different versions of relative judgment theory have implicated both discrete-state and continuous mediation of eyewitness decisions. The theory has never been formally specified, but dual-process models (Yonelinas, J Exp Psychol Learn Mem Cogn 20:1341-1354, 1994) provide one possible specification, thereby allowing us to evaluate how eyewitness decisions are mediated. We utilized a ranking task (Kellen and Klauer, J Exp Psychol Learn Mem Cogn 40:1795-1804, 2014) and found evidence for continuous mediation when facial stimuli match from study to test (Experiment 1) and when they mismatch (Experiment 2). This evidence, which is contrary to a version of relative judgment theory that has gained considerable traction in the legal community, compels reassessment of the role that guessing plays in eyewitness identification. Future research should continue to test formal explanations in order to advance theory, expedite the development of new procedures that can enhance the reliability of eyewitness evidence, and facilitate the exploration of task factors and emergent strategies that might influence when recognition is continuously or discretely mediated.
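
    For readers unfamiliar with the dual-process specification cited above: recollection is modeled as a discrete all-or-none state, and familiarity as a continuous signal-detection process. The sketch below renders the standard dual-process equations in Python; it is an illustrative rendering of that general model, not the authors' ranking-task analysis.

        from math import erf, sqrt

        def phi(z):
            # Standard normal cumulative distribution function.
            return 0.5 * (1.0 + erf(z / sqrt(2.0)))

        def dpsd_predictions(R, d_prime, c):
            # Targets: recollected with probability R; otherwise recognized
            # when continuous familiarity (mean d') exceeds the criterion c.
            hit = R + (1.0 - R) * phi(d_prime - c)
            # Lures: familiarity alone (mean 0) produces false alarms.
            false_alarm = phi(-c)
            return hit, false_alarm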

  13. Dynamic texture recognition using local binary patterns with an application to facial expressions.

    PubMed

    Zhao, Guoying; Pietikäinen, Matti

    2007-06-01

    Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
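
    To make the LBP-TOP idea concrete, the sketch below computes a basic radius-1, 8-neighbour LBP code image and concatenates histograms from the three orthogonal planes of a video volume. For brevity it samples only the central slice of each plane, whereas the published method accumulates codes around every pixel; all names here are illustrative.

        import numpy as np

        def lbp_image(img):
            # Basic radius-1, 8-neighbour LBP codes for one 2-D slice
            # (no circular interpolation, for brevity).
            c = img[1:-1, 1:-1]
            neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                          img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                          img[2:, :-2], img[1:-1, :-2]]
            code = np.zeros_like(c, dtype=np.uint8)
            for bit, n in enumerate(neighbours):
                code |= (n >= c).astype(np.uint8) << bit
            return code

        def lbp_top_histogram(volume):
            # Concatenated LBP histograms from the XY, XT and YT planes of a
            # (T, H, W) grey-level video volume.
            t, h, w = volume.shape
            planes = [volume[t // 2, :, :],    # XY: appearance
                      volume[:, h // 2, :],    # XT: horizontal motion
                      volume[:, :, w // 2]]    # YT: vertical motion
            hists = [np.bincount(lbp_image(p).ravel(), minlength=256)
                     for p in planes]
            return np.concatenate(hists).astype(float)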

  14. Social functioning and facial expression recognition in children with neurofibromatosis type 1.

    PubMed

    Allen, T; Willard, V W; Anderson, L M; Hardy, K K; Bonner, M J

    2016-03-01

    This study examined social functioning and facial expression recognition (FER) in children with neurofibromatosis type 1 (NF1) compared to typically developing peers. Specifically, the current research aimed to identify hypothesised relationships between neurocognitive ability, FER and social functioning. Children, ages 8 to 16, with NF1 (n = 23) and typically developing peers (n = 23) were recruited during regularly scheduled clinic visits and through advertisements on an institutional clinical trials website, respectively. Participants completed a measure of FER, an abbreviated intelligence test and questionnaires regarding their quality of life and behavioural functioning. Parents were also asked to complete questionnaires regarding the social-emotional and cognitive functioning of their child. As expected, there were significant differences between children with NF1 and typically developing peers across domains of social functioning and FER. Within the sample of children with NF1, there were no significant associations observed between cognitive measures, social functioning and facial recognition skills. Children with NF1 exhibited high rates of social impairment and weak FER skills compared to controls. The absence of associations between FER with cognitive and social variables, however, suggests something unique about this skill in children with NF1. Theoretical comparisons are made to children with autism spectrum disorders, as this condition may serve as a potentially useful model in better understanding FER in children with NF1. © 2016 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.

  15. Facial and semantic emotional interference: A pilot study on the behavioral and cortical responses to the dual valence association task

    PubMed Central

    2011-01-01

    Background Integration of compatible or incompatible emotional valence and semantic information is an essential aspect of complex social interactions. A modified version of the Implicit Association Test (IAT), called the Dual Valence Association Task (DVAT), was designed in order to measure conflict resolution processing arising from the compatibility/incompatibility of semantic and facial valence. The DVAT involves two emotional valence evaluative tasks which elicit two forms of emotional compatible/incompatible associations (facial and semantic). Methods Behavioural measures and event-related potentials were recorded while participants performed the DVAT. Results Behavioural data showed a robust effect that distinguished compatible/incompatible tasks. The effects of valence and contextual association (between facial and semantic stimuli) showed early discrimination in the N170 of faces. The LPP component was modulated by the compatibility of the DVAT. Conclusions Results suggest that the DVAT is a robust paradigm for studying the emotional interference effect in the processing of simultaneous information from semantic and facial stimuli. PMID:21489277

  16. Transfer-Appropriate Processing in Recognition Memory: Perceptual and Conceptual Effects on Recognition Memory Depend on Task Demands

    ERIC Educational Resources Information Center

    Parks, Colleen M.

    2013-01-01

    Research examining the importance of surface-level information to familiarity in recognition memory tasks is mixed: Sometimes it affects recognition and sometimes it does not. One potential explanation of the inconsistent findings comes from the ideas of dual process theory of recognition and the transfer-appropriate processing framework, which…

  17. Multi-task learning with group information for human action recognition

    NASA Astrophysics Data System (ADS)

    Qian, Li; Wu, Song; Pu, Nan; Xu, Shulin; Xiao, Guoqiang

    2018-04-01

    Human action recognition is an important and challenging task in computer vision research, due to variations in human motion performance, interpersonal differences and recording settings. In this paper, we propose a novel multi-task learning framework with group information (MTL-GI) for accurate and efficient human action recognition. Specifically, we first obtain group information by calculating the mutual information that reflects the latent relationship between Gaussian components and action categories, and cluster similar action categories into the same group using affinity propagation clustering. Then, in order to exploit the relationships among related tasks, we incorporate this group information into multi-task learning. Experimental results on two popular benchmarks (the UCF50 and HMDB51 datasets) demonstrate the superiority of the proposed MTL-GI framework.
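
    The grouping step described above can be sketched with scikit-learn's affinity propagation, which clusters directly from a precomputed similarity matrix; the mutual-information similarities between action categories are taken as given. This is an illustrative sketch under those assumptions, not the authors' MTL-GI code.

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        def group_action_categories(similarity):
            # 'similarity' is a square (n_categories x n_categories) matrix of
            # mutual-information-based similarities, assumed precomputed.
            ap = AffinityPropagation(affinity="precomputed", random_state=0)
            return ap.fit_predict(similarity)  # one group label per category

    Categories sharing a label would then share parameters in the multi-task learner.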

  18. Facial emotion recognition in adolescents with psychotic-like experiences: a school-based sample from the general population.

    PubMed

    Roddy, S; Tiedt, L; Kelleher, I; Clarke, M C; Murphy, J; Rawdon, C; Roche, R A P; Calkins, M E; Richard, J A; Kohler, C G; Cannon, M

    2012-10-01

    Psychotic symptoms, also termed psychotic-like experiences (PLEs) in the absence of psychotic disorder, are common in adolescents and are associated with increased risk of schizophrenia-spectrum illness in adulthood. At the same time, schizophrenia is associated with deficits in social cognition, with deficits particularly documented in facial emotion recognition (FER). However, little is known about the relationship between PLEs and FER abilities, with only one previous prospective study examining the association between these abilities in childhood and reported PLEs in adolescence. The current study was a cross-sectional investigation of the association between PLEs and FER in a sample of Irish adolescents. The Adolescent Psychotic-Like Symptom Screener (APSS), a self-report measure of PLEs, and the Penn Emotion Recognition-40 Test (Penn ER-40), a measure of facial emotion recognition, were completed by 793 children aged 10-13 years. Children who reported PLEs performed significantly more poorly on FER (β=-0.03, p=0.035). Recognition of sad faces was the major driver of effects, with children performing particularly poorly when identifying this expression (β=-0.08, p=0.032). The current findings show that PLEs are associated with poorer FER. Further work is needed to elucidate causal relationships with implications for the design of future interventions for those at risk of developing psychosis.

  19. Repeated short presentations of morphed facial expressions change recognition and evaluation of facial expressions.

    PubMed

    Moriya, Jun; Tanno, Yoshihiko; Sugiura, Yoshinori

    2013-11-01

    This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals' angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals' happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.

  20. Facial-affect recognition deficit as a predictor of different aspects of social-communication impairment in traumatic brain injury.

    PubMed

    Rigon, Arianna; Turkstra, Lyn S; Mutlu, Bilge; Duff, Melissa C

    2018-05-01

    To examine the relationship between facial-affect recognition and different aspects of self- and proxy-reported social-communication impairment following moderate-severe traumatic brain injury (TBI), forty-six adults with chronic TBI (>6 months postinjury) and 42 healthy comparison (HC) adults were administered the La Trobe Communication Questionnaire (LCQ) Self and Other forms, assessing different aspects of communication competence, and the Emotion Recognition Test (ERT), measuring the ability to recognize facial affect. Individuals with TBI underperformed HC adults on the ERT, and both they and their close others reported more communication problems than did HC adults. TBI group ERT scores were significantly and negatively correlated with LCQ-Other (but not LCQ-Self) scores (i.e., participants with lower emotion-recognition scores were rated by close others as having more communication problems). Multivariate regression analysis revealed that adults with higher ERT scores self-reported more problems with disinhibition-impulsivity and partner sensitivity and had fewer other-reported problems with disinhibition-impulsivity and conversational effectiveness. Our findings support growing evidence that emotion-recognition deficits play a role in specific aspects of social-communication outcomes after TBI and should be considered in treatment planning. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  1. Recognition of Schematic Facial Displays of Emotion in Parents of Children with Autism

    ERIC Educational Resources Information Center

    Palermo, Mark T.; Pasqualetti, Patrizio; Barbati, Giulia; Intelligente, Fabio; Rossini, Paolo Maria

    2006-01-01

    Performance on an emotional labeling task in response to schematic facial patterns representing five basic emotions without the concurrent presentation of a verbal category was investigated in 40 parents of children with autism and 40 matched controls. "Autism fathers" performed worse than "autism mothers," who performed worse than controls in…

  2. Computerized spatial delayed recognition span task: a specific tool to assess visuospatial working memory.

    PubMed

    Satler, Corina; Belham, Flávia Schechtman; Garcia, Ana; Tomaz, Carlos; Tavares, Maria Clotilde H

    2015-01-01

    A new tablet device version (iOS platform) of the Spatial Delayed Recognition Span Task (SDRST) was developed with the aim of investigating visuospatial Working Memory (WM) abilities based on touchscreen technology. This new WM testing application will be available to download for free from the App Store ("SDRST app"). In order to verify the feasibility of this computer-based task, we conducted three experiments with different manipulations and groups of participants. We were interested in investigating whether (1) the SDRST is sensitive enough to tap into cognitive differences brought about by aging and dementia; (2) different experimental manipulations work successfully; (3) cortical brain activations seen in other WM tasks are also demonstrated here; and (4) non-human primates are able to perform the task. Performance (scores and response time) was better for young than for older adults, and higher for the latter when compared to Alzheimer's disease (AD) patients. All groups performed better with facial stimuli than with images of scenes, and with emotional than with neutral stimuli. Electrophysiological data showed activation over prefrontal and frontal scalp areas, theta-band activity in the midline area, and gamma activity in the left temporal area; these are all scalp regions known to be related to attention and WM. In addition, our sample of adult captive capuchin monkeys (Sapajus libidinosus) performed the task above chance level. Taken together, these results corroborate the reliability of this new computer-based SDRST as a measure of visuospatial WM in clinical and non-clinical populations as well as in non-human primates. Its tablet app allows the task to be administered in a wide range of settings, including hospitals, homes, schools, laboratories, universities, and research institutions.

  3. Facial expression perception correlates with verbal working memory function in schizophrenia.

    PubMed

    Hagiya, Kumiko; Sumiyoshi, Tomiki; Kanie, Ayako; Pu, Shenghong; Kaneko, Koichi; Mogami, Tamiko; Oshima, Sachie; Niwa, Shin-ichi; Inagaki, Akiko; Ikebuchi, Emi; Kikuchi, Akiko; Yamasaki, Syudo; Iwata, Kazuhiko; Nakagome, Kazuyuki

    2015-12-01

    Facial emotion perception is considered to provide a measure of social cognition. Numerous studies have examined the perception of emotion in patients with schizophrenia, and the majority have reported an impaired ability to recognize facial emotion. We aimed to investigate the correlation between facial expression recognition and other domains of social cognition and neurocognition in Japanese patients with schizophrenia. Participants were 52 patients with schizophrenia and 53 normal controls with no history of psychiatric disease. All participants completed the Hinting Task and the Social Cognition Screening Questionnaire. The Brief Assessment of Cognition in Schizophrenia was administered only to the patients. Facial emotion perception, measured by the Facial Emotion Selection Test (FEST), was compared between the patients and normal controls. Patients performed significantly worse on the FEST compared to normal control subjects. The FEST total score was significantly positively correlated with scores on the Brief Assessment of Cognition in Schizophrenia Attention subscale, the Hinting Task, and the Social Cognition Screening Questionnaire Verbal Working Memory and Metacognition subscales. Stepwise multiple regression analysis revealed that verbal working memory function was positively related to facial emotion perception ability in patients with schizophrenia. These results point to the concept that facial emotion perception and some types of working memory use common cognitive resources. Our findings may provide implications for cognitive rehabilitation and related interventions in schizophrenia. © 2015 The Authors. Psychiatry and Clinical Neurosciences © 2015 Japanese Society of Psychiatry and Neurology.

  4. Configural and Featural Face Processing Influences on Emotion Recognition in Schizophrenia and Bipolar Disorder.

    PubMed

    Van Rheenen, Tamsyn E; Joshua, Nicole; Castle, David J; Rossell, Susan L

    2017-03-01

    Emotion recognition impairments have been demonstrated in schizophrenia (Sz), but are less consistent and lesser in magnitude in bipolar disorder (BD). This may be related to the extent to which different face processing strategies are engaged during emotion recognition in each of these disorders. We recently showed that Sz patients had impairments in the use of both featural and configural face processing strategies, whereas BD patients were impaired only in the use of the latter. Here we examine the influence that these impairments have on facial emotion recognition in these cohorts. Twenty-eight individuals with Sz, 28 individuals with BD, and 28 healthy controls completed a facial emotion labeling task with two conditions designed to separate the use of featural and configural face processing strategies; part-based and whole-face emotion recognition. Sz patients performed worse than controls on both conditions, and worse than BD patients on the whole-face condition. BD patients performed worse than controls on the whole-face condition only. Configural processing deficits appear to influence the recognition of facial emotions in BD, whereas both configural and featural processing abnormalities impair emotion recognition in Sz. This may explain discrepancies in the profiles of emotion recognition between the disorders. (JINS, 2017, 23, 287-291).

  5. The impact of the stimulus features and task instructions on facial processing in social anxiety: an ERP investigation.

    PubMed

    Peschard, Virginie; Philippot, Pierre; Joassin, Frédéric; Rossignol, Mandy

    2013-04-01

    Social anxiety has been characterized by an attentional bias towards threatening faces. Electrophysiological studies have demonstrated modulations of cognitive processing from 100 ms after stimulus presentation. However, the impact of the stimulus features and task instructions on facial processing remains unclear. Event-related potentials were recorded while high and low socially anxious individuals performed an adapted Stroop paradigm that included a colour-naming task with non-emotional stimuli, an emotion-naming task (the explicit task) and a colour-naming task (the implicit task) on happy, angry and neutral faces. Whereas the impact of task factors was examined by contrasting an explicit and an implicit emotional task, the effects of perceptual changes on facial processing were explored by including upright and inverted faces. The findings showed an enhanced P1 in social anxiety during the three tasks, without a moderating effect of the type of task or stimulus. These results suggest a global modulation of attentional processing in performance situations. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Cognitive factors affecting free recall, cued recall, and recognition tasks in Alzheimer's disease.

    PubMed

    Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru

    2012-01-01

    Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). We recruited 349 consecutive AD patients who attended a memory clinic. Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients' memory impairments in daily living.
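
    The tiered cueing procedure described above reduces to a simple branching rule for each target word; the hypothetical scoring sketch below is only meant to make the protocol explicit, and the labels are not the instrument's official scoring categories.

        def score_target_word(free_recalled, cued_recalled, recognized):
            # Tiered scoring of one target word: free recall first, then a
            # category cue, then forced-choice recognition with 2 distracters.
            if free_recalled:
                return "free recall"
            if cued_recalled:
                return "cued recall"
            return "recognition" if recognized else "not retrieved"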

  7. Recognition of facial expressions is moderated by Islamic cues.

    PubMed

    Kret, Mariska E; Fischer, Agneta H

    2018-05-01

    Recognising emotions from faces that are partly covered is more difficult than from fully visible faces. The focus of the present study is on the role of an Islamic versus non-Islamic context, i.e. Islamic versus non-Islamic headdress in perceiving emotions. We report an experiment that investigates whether briefly presented (40 ms) facial expressions of anger, fear, happiness and sadness are perceived differently when covered by a niqāb or turban, compared to a cap and shawl. In addition, we examined whether oxytocin, a neuropeptide regulating affection, bonding and cooperation between ingroup members and fostering outgroup vigilance and derogation, would differentially impact on emotion recognition from wearers of Islamic versus non-Islamic headdresses. The results first of all show that the recognition of happiness was more accurate when the face was covered by a Western compared to Islamic headdress. Second, participants more often incorrectly assigned sadness to a face covered by an Islamic headdress compared to a cap and shawl. Third, when correctly recognising sadness, they did so faster when the face was covered by an Islamic compared to Western headdress. Fourth, oxytocin did not modulate any of these effects. Implications for theorising about the role of group membership on emotion perception are discussed.

  8. Exposure to the self-face facilitates identification of dynamic facial expressions: influences on individual differences.

    PubMed

    Li, Yuan Hang; Tottenham, Nim

    2013-04-01

    A growing literature suggests that the self-face is involved in processing the facial expressions of others. The authors experimentally activated self-face representations to assess their effects on the recognition of dynamically emerging facial expressions of others. They exposed participants to videos of either their own faces (self-face prime) or the faces of others (nonself-face prime) prior to a facial expression judgment task. Their results show that experimentally activating self-face representations results in earlier recognition of dynamically emerging facial expressions. As a group, participants in the self-face prime condition recognized expressions earlier (when less affective perceptual information was available) than participants in the nonself-face prime condition. There were individual differences in performance, such that poorer expression identification was associated with higher autism traits (in this neurocognitively healthy sample). However, when randomized into the self-face prime condition, participants with high autism traits performed as well as those with low autism traits. Taken together, these data suggest that the ability to recognize facial expressions in others is linked with the internal representations of our own faces. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  9. The perception and identification of facial emotions in individuals with autism spectrum disorders using the Let's Face It! Emotion Skills Battery.

    PubMed

    Tanaka, James W; Wolf, Julie M; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin S; South, Mikle; McPartland, James C; Kaiser, Martha D; Schultz, Robert T

    2012-12-01

    Although impaired social-emotional ability is a hallmark of autism spectrum disorder (ASD), the perceptual skills and mediating strategies contributing to the social deficits of autism are not well understood. A perceptual skill that is fundamental to effective social communication is the ability to accurately perceive and interpret facial emotions. To evaluate the expression processing of participants with ASD, we designed the Let's Face It! Emotion Skills Battery (LFI! Battery), a computer-based assessment composed of three subscales measuring verbal and perceptual skills implicated in the recognition of facial emotions. We administered the LFI! Battery to groups of participants with ASD and typically developing control (TDC) participants that were matched for age and IQ. On the Name Game labeling task, participants with ASD (N = 68) performed on par with TDC individuals (N = 66) in their ability to name the facial emotions of happy, sad, disgust and surprise and were only impaired in their ability to identify the angry expression. On the Matchmaker Expression task that measures the recognition of facial emotions across different facial identities, the ASD participants (N = 66) performed reliably worse than TDC participants (N = 67) on the emotions of happy, sad, disgust, frighten and angry. In the Parts-Wholes test of perceptual strategies of expression, the TDC participants (N = 67) displayed more holistic encoding for the eyes than the mouths in expressive faces whereas ASD participants (N = 66) exhibited the reverse pattern of holistic recognition for the mouth and analytic recognition of the eyes. In summary, findings from the LFI! Battery show that participants with ASD were able to label the basic facial emotions (with the exception of angry expression) on par with age- and IQ-matched TDC participants. However, participants with ASD were impaired in their ability to generalize facial emotions across different identities and showed a tendency to recognize

  10. The role of spatial frequency information in the decoding of facial expressions of pain: a novel hybrid task.

    PubMed

    Wang, Shan; Eccleston, Christopher; Keogh, Edmund

    2017-11-01

    Spatial frequency (SF) information contributes to the recognition of facial expressions, including pain. Low-SF information encodes facial configuration and structure and often dominates over high-SF information, which encodes fine details in facial features. This low-SF preference has not been investigated within the context of pain. In this study, we investigated whether perceptual preference differences exist for low-SF and high-SF pain information. A novel hybrid expression paradigm was used in which 2 different expressions, one containing low-SF information and the other high-SF information, were combined in a facial hybrid. Participants were instructed to identify the core expression contained within the hybrid, allowing for the measurement of SF information preference. Three experiments were conducted (46 participants in each) that varied the expressions within the hybrid faces: respectively pain-neutral, pain-fear, and pain-happiness. In order to measure the temporal aspects of image processing, each hybrid image was presented for 33, 67, 150, and 300 ms. As expected, identification of pain and other expressions was dominated by low-SF information across the 3 experiments. The low-SF preference was largest when the presentation of hybrid faces was brief and decreased as the presentation duration increased. A sex difference was also found in experiment 1: for women, the low-SF preference was dampened by high-SF pain information when viewing low-SF neutral expressions. These results not only confirm the role that SF information plays in the recognition of pain in facial expressions but suggest that, in some situations, there may be sex differences in how pain is communicated.
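
    A hybrid face of the kind described can be built by low-pass filtering one expression and adding the high-pass residual of another. A minimal SciPy sketch follows; sigma (the low/high cut-off) and the 8-bit intensity range are illustrative assumptions, not the study's calibration.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def make_hybrid(face_low, face_high, sigma=6.0):
            # Keep the low spatial frequencies of one face and add the
            # high-frequency residual of the other (same-sized grey images).
            low = gaussian_filter(face_low.astype(float), sigma)
            high = face_high.astype(float) - gaussian_filter(
                face_high.astype(float), sigma)
            return np.clip(low + high, 0, 255).astype(np.uint8)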

  11. Facial emotion recognition in childhood-onset bipolar I disorder: an evaluation of developmental differences between youths and adults.

    PubMed

    Wegbreit, Ezra; Weissman, Alexandra B; Cushman, Grace K; Puzia, Megan E; Kim, Kerri L; Leibenluft, Ellen; Dickstein, Daniel P

    2015-08-01

    Bipolar disorder (BD) is a severe mental illness with high healthcare costs and poor outcomes. Increasing numbers of youths are diagnosed with BD, and many adults with BD report that their symptoms started in childhood, suggesting that BD can be a developmental disorder. Studies advancing our understanding of BD have shown alterations in facial emotion recognition both in children and adults with BD compared to healthy comparison (HC) participants, but none have evaluated the development of these deficits. To address this, we examined the effect of age on facial emotion recognition in a sample that included children and adults with confirmed childhood-onset type-I BD, with the adults having been diagnosed and followed since childhood by the Course and Outcome in Bipolar Youth study. Using the Diagnostic Analysis of Non-Verbal Accuracy, we compared facial emotion recognition errors among participants with BD (n = 66; ages 7-26 years) and HC participants (n = 87; ages 7-25 years). Complementary analyses investigated errors for child and adult faces. A significant diagnosis-by-age interaction indicated that younger BD participants performed worse than expected relative to HC participants their own age. The deficits occurred both for child and adult faces and were particularly strong for angry child faces, which were most often mistaken as sad. Our results were not influenced by medications, comorbidities/substance use, or mood state/global functioning. Younger individuals with BD are worse than their peers at this important social skill. This deficit may be an important developmentally salient treatment target - that is, for cognitive remediation to improve BD youths' emotion recognition abilities. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Concurrent and prospective associations between facial affect recognition accuracy and childhood antisocial behavior.

    PubMed

    Bowen, Erica; Dixon, Louise

    2010-01-01

    This study examined the concurrent and prospective associations between children's ability to accurately recognize facial affect at age 8.5 years and antisocial behavior at ages 8.5 and 10.5 years in a subsample of the Avon Longitudinal Study of Parents and Children cohort (5,396 children, of whom 2,644 [49%] were male). All observed effects were small. At age 8.5 years, in contrast to nonantisocial children, antisocial children were less accurate at decoding happy and sad expressions when presented at low intensity. In addition, concurrent antisocial behavior was associated with misidentifying expressions of fear as expressions of sadness. In longitudinal analyses, children who misidentified fear as anger exhibited a decreased risk of antisocial behavior 2 years later. The study suggests that concurrent rather than future antisocial behavior is associated with facial affect recognition accuracy. (c) 2010 Wiley-Liss, Inc.

  13. Visual attention during the evaluation of facial attractiveness is influenced by facial angles and smile.

    PubMed

    Kim, Seol Hee; Hwang, Soonshin; Hong, Yeon-Ju; Kim, Jae-Jin; Kim, Kyung-Ho; Chung, Chooryung J

    2018-05-01

    To examine the changes in visual attention influenced by facial angles and smile during the evaluation of facial attractiveness. Thirty-three young adults were asked to rate the overall facial attractiveness (task 1 and 3) or to select the most attractive face (task 2) by looking at multiple panel stimuli consisting of 0°, 15°, 30°, 45°, 60°, and 90° rotated facial photos with or without a smile for three model face photos and a self-photo (self-face). Eye gaze and fixation time (FT) were monitored by the eye-tracking device during the performance. Participants were asked to fill out a subjective questionnaire asking, "Which face was primarily looked at when evaluating facial attractiveness?" When rating the overall facial attractiveness (task 1) for model faces, FT was highest for the 0° face and lowest for the 90° face regardless of the smile (P < .01). However, when the most attractive face was to be selected (task 2), the FT of the 0° face decreased, while it significantly increased for the 45° face (P < .001). When facial attractiveness was evaluated with the simplified panels combined with facial angles and smile (task 3), the FT of the 0° smiling face was the highest (P < .01). While most participants reported that they looked mainly at the 0° smiling face when rating facial attractiveness, visual attention was broadly distributed within facial angles. Laterally rotated faces and presence of a smile highly influence visual attention during the evaluation of facial esthetics.

  14. Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.

    PubMed

    Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina

    2018-05-14

    The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: Together with similar findings in the auditory modality the findings imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition. Copyright © 2018. Published by Elsevier Inc.

  15. Face Recognition Vendor Test 2000: Appendices

    DTIC Science & Technology

    2001-02-01

    DARPA), NAVSEA Crane Division and NAVSEA Dahlgren Division are sponsoring an evaluation of commercial off-the-shelf (COTS) facial recognition products... The purpose of these evaluations is to accurately gauge the capabilities of facial recognition biometric systems that are currently available for... or development efforts. Participation in these tests is open to all facial recognition systems on the US commercial market. The U.S. Government will

  16. Cognitive Factors Affecting Free Recall, Cued Recall, and Recognition Tasks in Alzheimer's Disease

    PubMed Central

    Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru

    2012-01-01

    Background/Aims Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). Subjects: We recruited 349 consecutive AD patients who attended a memory clinic. Methods Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Results Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. Conclusion The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients’ memory impairments in daily living. PMID:22962551

  17. Fixation Patterns of Chinese Participants while Identifying Facial Expressions on Chinese Faces

    PubMed Central

    Xia, Mu; Li, Xueliu; Zhong, Haiqing; Li, Hong

    2017-01-01

    Two experiments were designed to explore Chinese participants' fixation patterns for four types of native facial expressions: happy, peaceful, sad, and angry. In both experiments, participants performed an emotion recognition task while their behaviors and eye movements were recorded. Experiment 1 (24 participants, 12 men) demonstrated that both the number of eye fixations and their durations were lower for the upper part of the face than for the lower part for all four types of facial expression. Experiment 2 (20 participants, 6 men) replicated this finding while ruling out interference from the initial fixation point. These results indicate that Chinese participants show a superiority effect for the lower part of the face when interpreting facial expressions, possibly due to the influence of eastern etiquette culture. PMID:28446896

  18. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: a fixation-to-feature approach

    PubMed Central

    Neath-Tavares, Karly N.; Itier, Roxane J.

    2017-01-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100–120ms occipitally, while responses to fearful expressions started around 150ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350ms. PMID:27430934

  19. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: A fixation-to-feature approach.

    PubMed

    Neath-Tavares, Karly N; Itier, Roxane J

    2016-09-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100-120ms occipitally, while responses to fearful expressions started around 150ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350ms. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Does Facial Resemblance Enhance Cooperation?

    PubMed Central

    Giang, Trang; Bell, Raoul; Buchner, Axel

    2012-01-01

    Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship was higher for the participants and the self-resemblant composite faces than for actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effects on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, and on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin recognition module built into the human face recognition system. PMID:23094095
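
    The resemblance manipulation rests on image morphing. Its simplest ingredient, a pixel-wise cross-dissolve between two pre-aligned faces, is sketched below; production morphing software additionally warps facial geometry, which this sketch omits.

        import numpy as np

        def cross_dissolve(self_face, other_face, alpha=0.5):
            # Weighted pixel blend of two pre-aligned, same-sized face images;
            # alpha is the proportion contributed by the participant's own face.
            blend = (alpha * self_face.astype(float)
                     + (1.0 - alpha) * other_face.astype(float))
            return np.clip(blend, 0, 255).astype(np.uint8)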

  1. Transdiagnostic deviant facial recognition for implicit negative emotion in autism and schizophrenia.

    PubMed

    Ciaramidaro, Angela; Bölte, Sven; Schlitt, Sabine; Hainz, Daniela; Poustka, Fritz; Weber, Bernhard; Freitag, Christine; Walter, Henrik

    2018-02-01

    Impaired facial affect recognition (FAR) is observed in schizophrenia and autism spectrum disorder (ASD) and has been linked to amygdala and fusiform gyrus dysfunction. ASD patients' impairments seem to be more pronounced during implicit rather than explicit FAR, whereas for schizophrenia the data are inconsistent. However, there are no studies comparing both patient groups in an identical design. The aim of this three-group study was to identify (i) whether FAR alterations are equally present in both groups, (ii) whether they are present during implicit or explicit FAR, and (iii) whether they are conveyed by similar or disorder-specific neural mechanisms. Using fMRI, we investigated neural activation during explicit and implicit negative and neutral FAR in 33 young-adult individuals with ASD, 20 subjects with paranoid schizophrenia and 25 IQ- and gender-matched control individuals. Differences in activation patterns between each clinical group and controls were found exclusively for implicit FAR, in the amygdala and fusiform gyrus. In addition, the ASD group showed reduced activations in medial prefrontal cortex (PFC), bilateral dorso-lateral PFC, ventro-lateral PFC, posterior-superior temporal sulcus and left temporo-parietal junction. Although subjects with ASD showed more widespread altered activation patterns, a direct comparison between both patient groups did not reveal disorder-specific deficits in either group. In summary, our findings are consistent with a common neural deficit during implicit negative facial affect recognition in schizophrenia and autism spectrum disorder. Copyright © 2017 Elsevier B.V. and ECNP. All rights reserved.

  2. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    ERIC Educational Resources Information Center

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  3. Own- and Other-Race Face Identity Recognition in Children: The Effects of Pose and Feature Composition

    ERIC Educational Resources Information Center

    Anzures, Gizelle; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; de Viviés, Xavier; Lee, Kang

    2014-01-01

    We used a matching-to-sample task and manipulated facial pose and feature composition to examine the other-race effect (ORE) in face identity recognition between 5 and 10 years of age. Overall, the present findings provide a genuine measure of own- and other-race face identity recognition in children that is independent of photographic and image…

  4. Wanting it Too Much: An Inverse Relation Between Social Motivation and Facial Emotion Recognition in Autism Spectrum Disorder

    PubMed Central

    Garman, Heather D.; Spaulding, Christine J.; Webb, Sara Jane; Mikami, Amori Yee; Morris, James P.

    2016-01-01

    This study examined social motivation and early-stage face perception as frameworks for understanding impairments in facial emotion recognition (FER) in a well-characterized sample of youth with autism spectrum disorders (ASD). Early-stage face perception (N170 event-related potential latency) was recorded while participants completed a standardized FER task, while social motivation was obtained via parent report. Participants with greater social motivation exhibited poorer FER, while those with shorter N170 latencies exhibited better FER for child angry faces stimuli. Social motivation partially mediated the relationship between a faster N170 and better FER. These effects were all robust to variations in IQ, age, and ASD severity. These findings augur against theories implicating social motivation as uniformly valuable for individuals with ASD, and augment models suggesting a close link between early-stage face perception, social motivation, and FER in this population. Broader implications for models and development of FER in ASD are discussed. PMID:26743637

  5. Wanting it Too Much: An Inverse Relation Between Social Motivation and Facial Emotion Recognition in Autism Spectrum Disorder.

    PubMed

    Garman, Heather D; Spaulding, Christine J; Webb, Sara Jane; Mikami, Amori Yee; Morris, James P; Lerner, Matthew D

    2016-12-01

    This study examined social motivation and early-stage face perception as frameworks for understanding impairments in facial emotion recognition (FER) in a well-characterized sample of youth with autism spectrum disorders (ASD). Early-stage face perception (N170 event-related potential latency) was recorded while participants completed a standardized FER task, while social motivation was obtained via parent report. Participants with greater social motivation exhibited poorer FER, while those with shorter N170 latencies exhibited better FER for child angry faces stimuli. Social motivation partially mediated the relationship between a faster N170 and better FER. These effects were all robust to variations in IQ, age, and ASD severity. These findings augur against theories implicating social motivation as uniformly valuable for individuals with ASD, and augment models suggesting a close link between early-stage face perception, social motivation, and FER in this population. Broader implications for models and development of FER in ASD are discussed.
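
    The partial mediation reported here follows the classic two-regression logic, with the indirect effect estimated as the product of the two constituent paths. Below is a generic statsmodels sketch of that procedure, under assumed variable names; it is not the study's actual model.

        import numpy as np
        import statsmodels.api as sm

        def simple_mediation(x, m, y):
            # Path a: predictor (e.g. N170 latency) -> mediator (motivation).
            a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
            # Paths c' and b: outcome regressed on predictor and mediator.
            fit = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
            c_prime, b = fit.params[1], fit.params[2]
            return {"indirect": a * b, "direct": c_prime}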

  6. Computerized spatial delayed recognition span task: a specific tool to assess visuospatial working memory

    PubMed Central

    Satler, Corina; Belham, Flávia Schechtman; Garcia, Ana; Tomaz, Carlos; Tavares, Maria Clotilde H.

    2015-01-01

    A new tablet device version (iOS platform) of the Spatial Delayed Recognition Span Task (SDRST) was developed with the aim of investigating visuospatial Working Memory (WM) abilities based on touchscreen technology. This new WM testing application will be available to download for free from the App Store (“SDRST app”). In order to verify the feasibility of this computer-based task, we conducted three experiments with different manipulations and groups of participants. We were interested in investigating whether (1) the SDRST is sensitive enough to tap into cognitive differences brought about by aging and dementia; (2) different experimental manipulations work successfully; (3) cortical brain activations seen in other WM tasks are also demonstrated here; and (4) non-human primates are able to perform the task. Performance (scores and response time) was better for young than for older adults, and higher for the latter when compared to Alzheimer’s disease (AD) patients. All groups performed better with facial stimuli than with images of scenes, and with emotional than with neutral stimuli. Electrophysiological data showed activation over prefrontal and frontal scalp areas, theta-band activity in the midline area, and gamma activity in the left temporal area; these are all scalp regions known to be related to attention and WM. In addition, our sample of adult captive capuchin monkeys (Sapajus libidinosus) performed the task above chance level. Taken together, these results corroborate the reliability of this new computer-based SDRST as a measure of visuospatial WM in clinical and non-clinical populations as well as in non-human primates. Its tablet app allows the task to be administered in a wide range of settings, including hospitals, homes, schools, laboratories, universities, and research institutions. PMID:25964758

  7. Recognition of emotion with temporal lobe epilepsy and asymmetrical amygdala damage.

    PubMed

    Fowler, Helen L; Baker, Gus A; Tipples, Jason; Hare, Dougal J; Keller, Simon; Chadwick, David W; Young, Andrew W

    2006-08-01

    Impairments in emotion recognition occur when there is bilateral damage to the amygdala. In this study, ability to recognize auditory and visual expressions of emotion was investigated in people with asymmetrical amygdala damage (AAD) and temporal lobe epilepsy (TLE). Recognition of five emotions was tested across three participant groups: those with right AAD and TLE, those with left AAD and TLE, and a comparison group. Four tasks were administered: recognition of emotion from facial expressions, sentences describing emotion-laden situations, nonverbal sounds, and prosody. Accuracy scores for each task and emotion were analysed, and no consistent overall effect of AAD on emotion recognition was found. However, some individual participants with AAD were significantly impaired at recognizing emotions, in both auditory and visual domains. The findings indicate that a minority of individuals with AAD have impairments in emotion recognition, but no evidence of specific impairments (e.g., visual or auditory) was found.

  8. Gender differences in the relationship between social communication and emotion recognition.

    PubMed

    Kothari, Radha; Skuse, David; Wakefield, Justin; Micali, Nadia

    2013-11-01

    To investigate the association between autistic traits and emotion recognition in a large community sample of children using facial and social motion cues, additionally stratifying by gender. A general population sample of 3,666 children from the Avon Longitudinal Study of Parents and Children (ALSPAC) were assessed on their ability to correctly recognize emotions using the faces subtest of the Diagnostic Analysis of Non-Verbal Accuracy, and the Emotional Triangles Task, a novel test assessing recognition of emotion from social motion cues. Children with autistic-like social communication difficulties, as assessed by the Social Communication Disorders Checklist, were compared with children without such difficulties. Autistic-like social communication difficulties were associated with poorer recognition of emotion from social motion cues in both genders, but were associated with poorer facial emotion recognition in boys only (odds ratio = 1.9, 95% CI = 1.4, 2.6, p = .0001). This finding must be considered in light of lower power to detect differences in girls. In this community sample of children, greater deficits in social communication skills are associated with poorer discrimination of emotions, implying there may be an underlying continuum of liability to the association between these characteristics. As a similar degree of association was observed in both genders on a novel test of social motion cues, the relatively good performance of girls on the more familiar task of facial emotion discrimination may be due to compensatory mechanisms. Our study might indicate the existence of a cognitive process by which girls with underlying autistic traits can compensate for their covert deficits in emotion recognition, although this would require further investigation. Copyright © 2013 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  9. The involvement of emotion recognition in affective theory of mind.

    PubMed

    Mier, Daniela; Lis, Stefanie; Neuthe, Kerstin; Sauer, Carina; Esslinger, Christine; Gallhofer, Bernd; Kirsch, Peter

    2010-11-01

    This study was conducted to explore the relationship between emotion recognition and affective Theory of Mind (ToM). Forty subjects performed a facial emotion recognition task and an emotional intention recognition task (affective ToM) in an event-related fMRI study. Conjunction analysis revealed overlapping activation during both tasks. Activation in some of these conjointly activated regions was even stronger during affective ToM than during emotion recognition, namely in the inferior frontal gyrus, the superior temporal sulcus, the temporal pole, and the amygdala. In contrast to previous studies investigating ToM, we found no activation in the anterior cingulate, commonly assumed to be the key region for ToM. The results point to a close relationship between emotion recognition and affective ToM and can be interpreted as evidence for the assumption that at least basal forms of ToM occur via an embodied, non-cognitive process. Copyright © 2010 Society for Psychophysiological Research.

  10. Diminished neural and cognitive responses to facial expressions of disgust in patients with psoriasis: a functional magnetic resonance imaging study.

    PubMed

    Kleyn, C Elise; McKie, Shane; Ross, Andrew R; Montaldi, Daniela; Gregory, Lloyd J; Elliott, Rebecca; Isaacs, Clare L; Anderson, Ian M; Richards, Helen L; Deakin, J F William; Fortune, Donal G; Griffiths, Christopher E M

    2009-11-01

    Psoriasis produces significant psychosocial disability; however, little is understood about the neurocognitive mechanisms that mediate the adverse consequences of the social stigma associated with visible skin lesions, such as disgusted facial expressions of others. Both the feeling of disgust and the observation of disgust in others are known to activate the insula cortex. We investigated whether the social impact of psoriasis is associated with altered cognitive processing of disgust using (i) a covert recognition of faces task conducted using functional magnetic resonance imaging (fMRI) and (ii) the facial expression recognition task (FERT), a decision-making task, conducted outside the scanner to assess the ability to recognize overtly different intensities of disgust. Thirteen right-handed male patients with psoriasis and 13 age-matched male controls were included. In the fMRI study, psoriasis patients had significantly (P<0.005) smaller signal responses to disgusted faces in the bilateral insular cortex compared with healthy controls. These data were corroborated by FERT, in that patients were less able than controls to identify all intensities of disgust tested. We hypothesize that patients with psoriasis, in this case male patients, develop a coping mechanism to protect them from stressful emotional responses by blocking the processing of disgusted facial expressions.

  11. Psilocybin biases facial recognition, goal-directed behavior, and mood state toward positive relative to negative emotions through different serotonergic subreceptors.

    PubMed

    Kometer, Michael; Schmidt, André; Bachmann, Rosilla; Studerus, Erich; Seifritz, Erich; Vollenweider, Franz X

    2012-12-01

    Serotonin (5-HT) 1A and 2A receptors have been associated with dysfunctional emotional processing biases in mood disorders. These receptors also predominantly mediate the subjective and behavioral effects of psilocybin and might be important for its recently suggested antidepressive effects. However, the effect of psilocybin on emotional processing biases and the specific contribution of 5-HT2A receptors across different emotional domains are unknown. In a randomized, double-blind study, 17 healthy human subjects received, on 4 separate days, placebo, psilocybin (215 μg/kg), the preferential 5-HT2A antagonist ketanserin (50 mg), or psilocybin plus ketanserin. Mood states were assessed by self-report ratings, and behavioral and event-related potential measurements were used to quantify facial emotion recognition and goal-directed behavior toward emotional cues. Psilocybin enhanced positive mood and attenuated recognition of negative facial expressions. Furthermore, psilocybin increased goal-directed behavior toward positive compared with negative cues, facilitated positive but inhibited negative sequential emotional effects, and valence-dependently attenuated the P300 component. Ketanserin alone had no effects but blocked the psilocybin-induced mood enhancement and the decreased recognition of negative facial expressions. This study shows that psilocybin shifts the emotional bias across various psychological domains and that activation of 5-HT2A receptors is central to mood regulation and emotional face recognition in healthy subjects. These findings may not only have implications for the pathophysiology of dysfunctional emotional biases but may also provide a framework to delineate the mechanisms underlying psilocybin's putative antidepressant effects. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  12. Emotion Recognition in Faces and the Use of Visual Context in Young People with High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew W.; Clarke, Paula; Miles, Jeremy; Nation, Kate; Clarke, Leesa; Williams, Christine

    2008-01-01

    We compared young people with high-functioning autism spectrum disorders (ASDs) with age, sex and IQ matched controls on emotion recognition of faces and pictorial context. Each participant completed two tests of emotion recognition. The first used Ekman series faces. The second used facial expressions in visual context. A control task involved…

  13. Forensic Facial Reconstruction: The Final Frontier.

    PubMed

    Gupta, Sonia; Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan

    2015-09-01

    Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. Through this article, we attempt to review the different methods of facial reconstruction reported in the literature. There are several techniques for facial reconstruction, ranging from two-dimensional drawings to three-dimensional clay models. With advancements in 3D technology, a rapid, efficient and cost-effective computerized 3D forensic facial reconstruction method has been developed, which has brought down the degree of error previously encountered. There are several methods of manual facial reconstruction, but the combination Manchester method has been reported to be the best and most accurate method for the positive recognition of an individual. Recognition allows the involved government agencies to make a list of suspected victims. This list can then be narrowed down, and a positive identification may be made by the more conventional methods of forensic medicine. Facial reconstruction makes visual identification by the individual's family and associates easier and more definite.

  14. Dissociation of Neural Substrates of Response Inhibition to Negative Information between Implicit and Explicit Facial Go/Nogo Tasks: Evidence from an Electrophysiological Study

    PubMed Central

    Sun, Shiyue; Carretié, Luis; Zhang, Lei; Dong, Yi; Zhu, Chunyan; Luo, Yuejia; Wang, Kai

    2014-01-01

    Background: Although ample evidence suggests that emotion and response inhibition are interrelated at the behavioral and neural levels, the neural substrates of response inhibition to negative facial information remain unclear. We therefore used event-related potential (ERP) methods to explore the effects of explicit and implicit facial expression processing on response inhibition. Methods: We used implicit (gender categorization) and explicit (emotion categorization) emotional Go/Nogo tasks in which neutral and sad faces were presented. Electrophysiological markers at the scalp and voxel levels were analyzed during the two tasks. Results: We detected a task × emotion × trial type interaction effect in the Nogo-P3 stage. Larger Nogo-P3 amplitudes for sad conditions versus neutral conditions were detected in explicit tasks. However, the amplitude differences between the two conditions were not significant for implicit tasks. Source analyses of the P3 component revealed that the right inferior frontal junction (rIFJ) was involved during this stage. The current source density (CSD) of the rIFJ was higher in sad conditions than in neutral conditions for explicit tasks, but not for implicit tasks. Conclusions: The findings indicate that response inhibition was modulated by sad facial information at the action inhibition stage when facial expressions were processed explicitly rather than implicitly. The rIFJ may be a key brain region in emotion regulation. PMID:25330212

  15. Facial decoding in schizophrenia is underpinned by basic visual processing impairments.

    PubMed

    Belge, Jan-Baptist; Maurage, Pierre; Mangelinckx, Camille; Leleux, Dominique; Delatte, Benoît; Constant, Eric

    2017-09-01

    Schizophrenia is associated with a strong deficit in the decoding of emotional facial expressions (EFE). Nevertheless, it is still unclear whether this deficit is specific to emotions or due to a more general impairment in facial processing. This study was designed to clarify this issue. Thirty patients suffering from schizophrenia and 30 matched healthy controls performed several tasks evaluating the recognition of both changeable (i.e., eye orientation and emotions) and stable (i.e., gender, age) facial characteristics. Accuracy and reaction times were recorded. Patients presented a performance deficit (in both accuracy and reaction times) in the perception of both changeable and stable aspects of faces, without any specific deficit for emotional decoding. Our results demonstrate a generalized face recognition deficit in schizophrenia, probably caused by an impairment in basic visual processing. The EFE decoding deficit thus appears not to be specific to emotion processing, but at least partly related to a generalized deficit in lower-level visual processing that occurs before the stage of emotion processing and underlies more complex cognitive dysfunctions. These findings should encourage future investigations to explore the neurophysiological background of these generalized perceptual deficits, and stimulate a clinical approach focusing on more basic visual processing. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  16. [Explicit memory for type font of words in source monitoring and recognition tasks].

    PubMed

    Hatanaka, Yoshiko; Fujita, Tetsuya

    2004-02-01

    We investigated whether people can consciously remember the type fonts of words, using two methods of examining explicit memory: source monitoring and old/new recognition. We set matched, non-matched, and non-studied conditions between the study and test words using two kinds of type fonts: Gothic and MARU. After studying words under one type of encoding, semantic or physical, subjects in a source-monitoring task made a three-way discrimination between new words, Gothic words, and MARU words (Exp. 1). Subjects in an old/new-recognition task indicated whether test words had been previously presented or not (Exp. 2). We compared the source judgments with the old/new recognition data. These data showed conscious recollection of the type font of words in the source-monitoring task, and a dissociation between source-monitoring and old/new-recognition performance.

  17. Thermal-to-visible face recognition using partial least squares.

    PubMed

    Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson

    2015-03-01

    Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.
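
    To make the one-vs-all PLS model building concrete, here is a minimal sketch in the spirit of the abstract, not the authors' exact pipeline: it assumes descriptors have already been extracted from preprocessed visible gallery images and thermal probes, fits one PLS regression per gallery subject with +1/-1 targets, and identifies a probe by the highest model response. All array names and sizes are illustrative.

    ```python
    # Hedged sketch of one-vs-all PLS matching for cross-modal face
    # recognition. Feature extraction and the preprocessing that narrows
    # the thermal/visible modality gap are assumed done already.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def train_one_vs_all_pls(X_vis, subject_ids, n_components=10):
        """Fit one PLS regression model per gallery subject."""
        models = {}
        for sid in np.unique(subject_ids):
            y = np.where(subject_ids == sid, 1.0, -1.0)  # +1 target, -1 rest
            pls = PLSRegression(n_components=n_components)
            pls.fit(X_vis, y)
            models[sid] = pls
        return models

    def identify(models, x_probe):
        """Score a thermal probe against every subject model; highest wins."""
        scores = {sid: m.predict(x_probe.reshape(1, -1))[0, 0]
                  for sid, m in models.items()}
        return max(scores, key=scores.get)

    # Toy usage with random data standing in for real descriptors.
    rng = np.random.default_rng(0)
    X_vis = rng.normal(size=(60, 128))      # 60 visible-light gallery descriptors
    ids = np.repeat(np.arange(6), 10)       # 6 subjects, 10 samples each
    models = train_one_vs_all_pls(X_vis, ids)
    print(identify(models, rng.normal(size=128)))
    ```

    A real system would bridge the modality gap before this step, as the abstract describes; here the gallery models are fit on visible descriptors only, so the sketch stands in for the matching stage alone.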

  18. Alterations in Resting-State Activity Relate to Performance in a Verbal Recognition Task

    PubMed Central

    López Zunini, Rocío A.; Thivierge, Jean-Philippe; Kousaie, Shanna; Sheppard, Christine; Taler, Vanessa

    2013-01-01

    In the brain, resting-state activity refers to non-random patterns of intrinsic activity occurring when participants are not actively engaged in a task. We monitored resting-state activity using electroencephalography (EEG) both before and after a verbal recognition task. We show a strong positive correlation between accuracy in verbal recognition and pre-task resting-state alpha power at posterior sites. We further characterized this effect by examining post-task resting-state activity. We found marked alterations in resting-state alpha power when comparing pre- and post-task periods, with more pronounced alterations in participants who attained higher task accuracy. These findings support a dynamical view of cognitive processes in which patterns of ongoing brain activity can facilitate, or interfere with, optimal task performance. PMID:23785436

  19. [Recognition of facial expressions of emotions by 3-year-olds depending on sleep and risk of depression].

    PubMed

    Bat-Pitault, F; Da Fonseca, D; Flori, S; Porcher-Guinet, V; Stagnara, C; Patural, H; Franco, P; Deruelle, C

    2017-10-01

    Emotional processing is characterized by a negative bias in depression, so it was legitimate to establish whether the same is true in very young at-risk children. Furthermore, sleep, which has also been proposed as a marker of depression risk, is closely linked with emotion in adults and adolescents. We therefore aimed, first, to better describe the characteristics of emotional recognition by 3-year-olds and their links with sleep and, second, to observe whether an emotional recognition pattern indicating a vulnerability to depression could be found at this young age. We studied, in 133 children aged 36 months from the AuBE cohort, the number of correct answers on a task of recognition of facial emotions (joy, anger and sadness). Cognitive functions were also assessed with the WPPSI-III at 3 years of age, and the different sleep parameters (time of lights off and lights on, sleep times, difficulty going to sleep and number of parental awakenings per night) were described by questionnaires filled out by mothers at 6, 12, 18, 24 and 36 months after birth. Of these 133 children, 21 children whose mothers had at least one history of depression (13 boys) formed the high-risk group, and 19 children (8 boys) born to women with no history of depression formed the low-risk (or control) group. Overall, the children at 36 months recognized happiness significantly better than the other emotions (P=0.000), with global recognition higher in girls (M=8.8) than boys (M=7.8) (P=0.013) and a positive correlation between global recognition ability and verbal IQ (P=0.000). Children who had less daytime sleep at 18 months and those who slept less at 24 months showed better recognition of sadness (P=0.043 and P=0.042); those with difficulties at bedtime at 18 months recognized happiness less well (P=0.043), and those who woke earlier at 24 months showed better global recognition of emotions (P=0.015). Finally, boys in the high-risk group recognized sadness better than boys in the control group (P=0

  20. Theory of Mind, Emotion Recognition and Social Perception in Individuals at Clinical High Risk for Psychosis: findings from the NAPLS-2 cohort.

    PubMed

    Barbato, Mariapaola; Liu, Lu; Cadenhead, Kristin S; Cannon, Tyrone D; Cornblatt, Barbara A; McGlashan, Thomas H; Perkins, Diana O; Seidman, Larry J; Tsuang, Ming T; Walker, Elaine F; Woods, Scott W; Bearden, Carrie E; Mathalon, Daniel H; Heinssen, Robert; Addington, Jean

    2015-09-01

    Social cognition, the mental operations that underlie social interactions, is a major construct to investigate in schizophrenia. Impairments in social cognition are present before the onset of psychosis, and even in unaffected first-degree relatives, suggesting that social cognition may be a trait marker of the illness. In a large cohort of individuals at clinical high risk for psychosis (CHR) and healthy controls, three domains of social cognition (theory of mind, facial emotion recognition and social perception) were assessed to clarify which domains are impaired in this population. Six hundred and seventy-five CHR individuals and 264 controls, who were part of the multi-site North American Prodrome Longitudinal Study, completed The Awareness of Social Inference Test, the Penn Emotion Recognition task, the Penn Emotion Differentiation task, and the Relationship Across Domains, measures of theory of mind, facial emotion recognition, and social perception, respectively. Social cognition was not related to positive and negative symptom severity, but was associated with age and IQ. CHR individuals demonstrated poorer performance on all measures of social cognition. However, after controlling for age and IQ, the group differences remained significant for measures of theory of mind and social perception, but not for facial emotion recognition. Theory of mind and social perception are impaired in individuals at CHR for psychosis. Age and IQ seem to play an important role in the emergence of deficits in facial affect recognition. Future studies should examine the stability of social cognition deficits over time and their role, if any, in the development of psychosis.

  1. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    PubMed

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks for hierarchical visual recognition more effectively. A visual tree is then learned by assigning visually similar atomic object classes with similar learning complexities to the same group, which provides a good environment for determining the interrelated learning tasks automatically. By leveraging inter-task relatedness (inter-class similarity) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing visually similar atomic object classes effectively. The HD-MTL algorithm integrates two discriminative regularization terms to control inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results demonstrate that the HD-MTL algorithm achieves very competitive accuracy rates for large-scale visual recognition.
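
    As a rough illustration of the visual-tree idea (grouping visually similar classes so that related tasks can share a node classifier), the hedged sketch below clusters per-class mean deep features; the cosine-linkage criterion is an assumption for illustration, and the paper's own tree-learning procedure differs in detail. All array names and sizes are made up.

    ```python
    # Hedged sketch of one ingredient of HD-MTL: group visually similar
    # classes by clustering their mean deep features, so each group can
    # later share a multi-task node classifier. The clustering criterion
    # here is an assumption, not the paper's exact procedure.
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def build_class_groups(class_means, n_groups):
        """class_means: (n_classes, d) mean deep feature per class."""
        # `metric=` requires scikit-learn >= 1.2; older versions used `affinity=`.
        clust = AgglomerativeClustering(
            n_clusters=n_groups, metric="cosine", linkage="average")
        labels = clust.fit_predict(class_means)
        return {g: np.flatnonzero(labels == g).tolist() for g in set(labels)}

    # Toy usage: 100 classes with 512-d mean features grouped into 10 nodes.
    rng = np.random.default_rng(1)
    groups = build_class_groups(rng.normal(size=(100, 512)), n_groups=10)
    ```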

  2. Effect of positive emotion on consolidation of memory for faces: the modulation of facial valence and facial gender.

    PubMed

    Wang, Bo

    2013-01-01

    Studies have shown that emotion elicited after learning enhances memory consolidation. However, no prior studies have used facial photos as stimuli. This study examined the effect of post-learning positive emotion on consolidation of memory for faces. During learning, participants viewed neutral, positive, or negative faces. They were then assigned to a condition in which they watched either a 9-minute positive video clip or a 9-minute neutral video. Thirty minutes after learning, participants took a surprise memory test in which they made "remember", "know", and "new" judgements. The findings are: (1) positive emotion enhanced consolidation of recognition for negative male faces, but impaired consolidation of recognition for negative female faces; (2) for male faces, recognition of negative faces was equivalent to that of positive faces; for female faces, recognition of negative faces was better than that of positive faces. Our study provides important evidence that the effect of post-learning emotion on memory consolidation extends to facial stimuli and that this effect is modulated by facial valence and facial gender. The findings may shed light on establishing models concerning the influence of emotion on memory consolidation.

  3. Effects of Context and Facial Expression on Imitation Tasks in Preschool Children with Autism

    ERIC Educational Resources Information Center

    Markodimitraki, Maria; Kypriotaki, Maria; Ampartzaki, Maria; Manolitsis, George

    2013-01-01

    The present study explored the effect of the context in which an imitation act occurs (elicited/spontaneous) and the experimenter's facial expression (neutral or smiling) during the imitation task with young children with autism and typically developing children. The participants were 10 typically developing children and 10 children with autism…

  4. Emotional recognition of dynamic facial expressions before and after cochlear implantation in adults with progressive deafness.

    PubMed

    Ambert-Dahan, Emmanuèle; Giraud, Anne-Lise; Mecheri, Halima; Sterkers, Olivier; Mosnier, Isabelle; Samson, Séverine

    2017-10-01

    Visual processing has been extensively explored in deaf subjects in the context of verbal communication, through the assessment of speech reading and sign language abilities. However, little is known about visual emotional processing in adult progressive deafness, and after cochlear implantation. The goal of our study was thus to assess the influence of acquired post-lingual progressive deafness on the recognition of dynamic facial emotions selected to express canonical fear, happiness, sadness, and anger. A total of 23 adults with post-lingual deafness, separated into two groups assessed either before (n = 10) or after (n = 13) cochlear implantation (CI), and 13 normal-hearing (NH) individuals participated in the current study. Participants were asked to rate the expression of the four cardinal emotions, and to evaluate both their emotional valence (unpleasant-pleasant) and arousal potential (relaxing-stimulating). We found that patients with deafness were impaired in the recognition of sad faces, and that patients equipped with a CI were additionally impaired in the recognition of happiness and fear (but not anger). Relative to controls, all patients with deafness showed a deficit in perceiving arousal expressed in faces, while valence ratings remained unaffected. The current results show for the first time that acquired, progressive deafness is associated with a reduction of emotional sensitivity to visual stimuli. This negative impact of progressive deafness on the perception of dynamic facial cues for emotion recognition contrasts with the proficiency of deaf subjects with and without CIs in processing visual speech cues (Rouger et al., 2007; Strelnikov et al., 2009; Lazard and Giraud, 2017). Altogether, these results suggest a trade-off between the processing of linguistic and non-linguistic visual stimuli. Copyright © 2017. Published by Elsevier B.V.

  5. Gaze behavior predicts memory bias for angry facial expressions in stable dysphoria.

    PubMed

    Wells, Tony T; Beevers, Christopher G; Robison, Adrienne E; Ellis, Alissa J

    2010-12-01

    Interpersonal theories suggest that depressed individuals are sensitive to signs of interpersonal rejection, such as angry facial expressions. The present study examined memory bias for happy, sad, angry, and neutral facial expressions in stably dysphoric and stably nondysphoric young adults. Participants' gaze behavior (i.e., fixation duration, number of fixations, and distance between fixations) while viewing these facial expressions was also assessed. Using signal detection analyses, the dysphoric group had better accuracy on a surprise recognition task for angry faces than the nondysphoric group. Further, mediation analyses indicated that greater breadth of attentional focus (i.e., distance between fixations) accounted for enhanced recall of angry faces among the dysphoric group. There were no differences between dysphoria groups in gaze behavior or memory for sad, happy, or neutral facial expressions. Findings from this study identify a specific cognitive mechanism (i.e., breadth of attentional focus) that accounts for biased recall of angry facial expressions in dysphoria. This work also highlights the potential for integrating cognitive and interpersonal theories of depression.
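
    For readers unfamiliar with the signal detection analyses mentioned above: sensitivity on a recognition task is commonly summarized as d', the difference between the z-transformed hit and false-alarm rates. A minimal sketch follows; the log-linear correction is a common convention, not necessarily the one used in this study, and the counts are invented.

    ```python
    # Minimal sketch of a signal detection sensitivity index (d') for a
    # recognition task. The log-linear correction keeps the z-transform
    # finite when a rate is 0 or 1; it is one common convention.
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        hr = (hits + 0.5) / (hits + misses + 1.0)                    # hit rate
        far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hr) - norm.ppf(far)

    # e.g. 18 hits / 2 misses on old faces, 5 false alarms / 15 correct
    # rejections on new faces:
    print(d_prime(18, 2, 5, 15))  # approx. 1.8
    ```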

  6. Measuring facial expression of emotion.

    PubMed

    Wolf, Karsten

    2015-12-01

    Research into emotions has increased in recent decades, especially on the subject of emotion recognition. However, studies of the facial expression of emotion were long compromised by technical problems with visible video analysis and electromyography in experimental settings; these have only recently been overcome. There have been new developments in the field of automated computerized facial recognition, allowing real-time identification of facial expressions in social environments. This review addresses three approaches to measuring facial expression of emotion and describes their specific contributions to understanding emotion in the healthy population and in persons with mental illness. Despite recent progress, studies on human emotions have been hindered by the lack of consensus on an emotion theory suited to examining the dynamic aspects of emotion and its expression. Studying the expression of emotion in patients with mental health conditions for diagnostic and therapeutic purposes will profit from theoretical and methodological progress.

  7. On the facilitative effects of face motion on face recognition and its development

    PubMed Central

    Xiao, Naiqi G.; Perrotta, Steve; Quinn, Paul C.; Wang, Zhe; Sun, Yu-Hao P.; Lee, Kang

    2014-01-01

    For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts. PMID:25009517

  8. Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.

    PubMed

    Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal

    2018-04-23

    Face recognition aims to establish the identity of a person based on facial characteristics. Age group estimation, on the other hand, is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissue (skin, fat, muscles, and bones). Certain holistic and local facial features are used to recognize age-separated face images. However, most of the existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from the age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in overall performance compared to using the face recognition algorithm alone. The experimental results on two large facial aging datasets, the MORPH and FERET sets, show that the proposed age-assisted face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.
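
    A hedged sketch of what an "asymmetric facial dimension" can look like in practice: mirror right-side landmarks across the face midline and measure the residual distance to their left-side counterparts. The landmark pairing and midline definition below are hypothetical; the paper defines its own dimensions.

    ```python
    # Hedged sketch of facial asymmetry features from 2-D landmarks:
    # reflect each right-side landmark across the midline and measure how
    # far it lands from its left-side counterpart. Pairings are made up.
    import numpy as np

    def asymmetry_features(landmarks, pairs, midline_x):
        """landmarks: (n, 2) array; pairs: [(left_idx, right_idx), ...]."""
        feats = []
        for li, ri in pairs:
            mirrored = landmarks[ri].copy()
            mirrored[0] = 2 * midline_x - mirrored[0]  # reflect across midline
            feats.append(np.linalg.norm(landmarks[li] - mirrored))
        return np.asarray(feats)  # one asymmetry score per landmark pair

    # Toy usage with four hypothetical landmarks and a midline at x = 50.
    pts = np.array([[30., 40.], [70., 42.], [28., 60.], [73., 59.]])
    print(asymmetry_features(pts, pairs=[(0, 1), (2, 3)], midline_x=50.0))
    ```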

  9. The effects of alcohol on the recognition of facial expressions and microexpressions of emotion: enhanced recognition of disgust and contempt.

    PubMed

    Felisberti, Fatima; Terry, Philip

    2015-09-01

    The study compared alcohol's effects on the recognition of briefly displayed facial expressions of emotion (so-called microexpressions) with its effects on expressions presented for a longer period. Using a repeated-measures design, we tested 18 participants three times (counterbalanced), after (i) a placebo drink, (ii) a low-to-moderate dose of alcohol (0.17 g/kg women; 0.20 g/kg men) and (iii) a moderate-to-high dose of alcohol (0.52 g/kg women; 0.60 g/kg men). In each session, participants were presented with stimuli representing six emotions (happiness, sadness, anger, fear, disgust and contempt) overlaid on a generic avatar in a six-alternative forced-choice paradigm. A neutral expression (1 s) preceded and followed a target expression presented for 200 ms (microexpressions) or 400 ms. Participants mouse-clicked the correct answer. The recognition of disgust was significantly better after the high dose of alcohol than after the low dose or placebo drinks at both durations of stimulus presentation. A similar profile of effects was found for the recognition of contempt. There were no effects on response latencies. Alcohol can increase sensitivity to expressions of disgust and contempt. Such effects are not dependent on stimulus duration up to 400 ms and may reflect contextual modulation of alcohol's effects on emotion recognition. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Eye-Tracking Study on Facial Emotion Recognition Tasks in Individuals with High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Tsang, Vicky

    2018-01-01

    The eye-tracking experiment was carried out to assess fixation duration and scan paths that individuals with and without high-functioning autism spectrum disorders employed when identifying simple and complex emotions. Participants viewed human photos of facial expressions and decided on the identification of emotion, the negative-positive emotion…

  11. Automatic Recognition of Fetal Facial Standard Plane in Ultrasound Image via Fisher Vector.

    PubMed

    Lei, Baiying; Tan, Ee-Leng; Chen, Siping; Zhuo, Liu; Li, Shengli; Ni, Dong; Wang, Tianfu

    2015-01-01

    Acquisition of the standard plane is a prerequisite for biometric measurement and diagnosis during ultrasound (US) examination. In this paper, a new algorithm is developed for the automatic recognition of the fetal facial standard planes (FFSPs), such as the axial, coronal, and sagittal planes. Specifically, densely sampled root scale invariant feature transform (RootSIFT) features are extracted and then encoded by Fisher vectors (FV). A Fisher network with a multi-layer design is also developed to extract spatial information to boost classification performance. Finally, automatic recognition of the FFSPs is implemented by a support vector machine (SVM) classifier based on the stochastic dual coordinate ascent (SDCA) algorithm. Experimental results using our dataset demonstrate that the proposed method achieves an accuracy of 93.27% and a mean average precision (mAP) of 99.19% in recognizing different FFSPs. Furthermore, comparative analyses reveal the superiority of the proposed FV-based method over traditional methods.
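
    The two descriptor steps named above can be sketched compactly. RootSIFT is the standard Hellinger re-mapping of SIFT descriptors (L1-normalize, then take element-wise square roots); the Fisher vector below keeps only the first-order (mean-gradient) terms for brevity, whereas a full FV also stacks second-order terms. The descriptor arrays are random stand-ins, not real SIFT output.

    ```python
    # Sketch of RootSIFT plus a simplified (first-order only) Fisher
    # vector encoding under a diagonal-covariance GMM. Illustrative, not
    # the paper's exact pipeline.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def root_sift(descs, eps=1e-7):
        """descs: (n, 128) SIFT descriptors -> RootSIFT (L1-normalize, sqrt)."""
        descs = descs / (np.abs(descs).sum(axis=1, keepdims=True) + eps)
        return np.sqrt(descs)

    def fisher_vector_means(descs, gmm):
        """First-order Fisher vector of local descriptors under a fitted GMM."""
        q = gmm.predict_proba(descs)                 # (n, K) soft assignments
        diff = descs[:, None, :] - gmm.means_[None]  # (n, K, d)
        diff /= np.sqrt(gmm.covariances_)[None]      # diagonal covariances
        fv = (q[:, :, None] * diff).sum(axis=0)      # (K, d) mean gradients
        fv /= descs.shape[0] * np.sqrt(gmm.weights_)[:, None]
        return fv.ravel()

    # Toy usage: fit a small diagonal GMM on RootSIFT descriptors, encode one image.
    rng = np.random.default_rng(2)
    train = root_sift(rng.random((1000, 128)))
    gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(train)
    fv = fisher_vector_means(root_sift(rng.random((300, 128))), gmm)  # 8*128 dims
    ```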

  12. Individual Differences in the Speed of Facial Emotion Recognition Show Little Specificity but Are Strongly Related with General Mental Speed: Psychometric, Neural and Genetic Evidence

    PubMed Central

    Liu, Xinyang; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Cai, Xinxia; Wilhelm, Oliver

    2017-01-01

    Facial identity and facial expression processing are crucial socio-emotional abilities but seem to show only limited psychometric uniqueness when processing speed is considered in easy tasks. We applied a comprehensive measurement of processing speed and contrasted performance specificity for socio-emotional, social and non-social stimuli from an individual differences perspective. Performance in a multivariate task battery was best modeled by a general speed factor and a first-order factor capturing specific variance due to processing emotional facial expressions. We further tested the equivalence of the relationships between speed factors and polymorphisms of dopamine and serotonin transporter genes. Results show that the speed factors are not only psychometrically equivalent but also invariant in their relation to the Catechol-O-Methyl-Transferase (COMT) Val158Met polymorphism. However, the 5-HTTLPR/rs25531 serotonin polymorphism was related to the first-order factor of emotion perception speed, suggesting a specific genetic correlate of emotion processing. We further investigated the relationship between several components of event-related brain potentials and psychometric abilities, and tested emotion-specific individual differences at the neurophysiological level. Results revealed that swifter emotion perception abilities went along with larger amplitudes of the P100 and the Early Posterior Negativity (EPN) when emotion processing was modeled on its own. However, after partialling out the variance that emotion perception speed shares with general processing speed abilities, brain-behavior relationships were no longer specific to emotion. Together, the present results suggest that speed abilities are strongly interrelated but show some specificity for emotion processing speed at the psychometric level. At both the genetic and neurophysiological levels, emotion specificity depended on whether general cognition was taken into account. These

  13. A Brief Review of Facial Emotion Recognition Based on Visual Information.

    PubMed

    Ko, Byoung Chul

    2018-01-30

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of research in the field of FER conducted over the past decades. First, conventional FER approaches are described, along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling "end-to-end" learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for the temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which serve as a standard for quantitative comparison of FER research, is described. This review can serve as a brief guidebook for newcomers to the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as for experienced researchers looking for productive directions for future work.
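
    The hybrid approach highlighted in the review (a CNN for per-frame spatial features, an LSTM for temporal structure across frames) can be sketched as follows. Layer sizes and input dimensions are illustrative, not taken from any cited system.

    ```python
    # Hedged sketch of a CNN + LSTM hybrid for FER on short face clips: a
    # small CNN encodes each frame, an LSTM integrates features over time,
    # and a linear head predicts one of the basic emotions.
    import torch
    import torch.nn as nn

    class CnnLstmFER(nn.Module):
        def __init__(self, n_emotions=7, feat_dim=128, hidden=64):
            super().__init__()
            self.cnn = nn.Sequential(                  # per-frame spatial encoder
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
                nn.Flatten(), nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU())
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_emotions)

        def forward(self, clips):                      # clips: (B, T, 1, H, W)
            b, t = clips.shape[:2]
            feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)                  # temporal integration
            return self.head(out[:, -1])               # classify last time step

    # Toy usage: 2 clips of 16 grayscale 48x48 frames -> (2, 7) logits.
    logits = CnnLstmFER()(torch.randn(2, 16, 1, 48, 48))
    ```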

  14. Italian normative data and validation of two neuropsychological tests of face recognition: Benton Facial Recognition Test and Cambridge Face Memory Test.

    PubMed

    Albonico, Andrea; Malaspina, Manuela; Daini, Roberta

    2017-09-01

    The Benton Facial Recognition Test (BFRT) and the Cambridge Face Memory Test (CFMT) are two of the most common tests used to assess face discrimination and recognition abilities and to identify individuals with prosopagnosia. However, recent studies have highlighted that participant-stimulus ethnicity match, as much as gender, has to be taken into account in interpreting results from these tests. Here, in order to obtain more appropriate normative data for an Italian sample, the CFMT and BFRT were administered to a large cohort of young adults. We found that scores on the BFRT are not affected by participants' gender and are only slightly affected by participant-stimulus ethnicity match, whereas both these factors seem to influence scores on the CFMT. Moreover, the inclusion of a sample of individuals with suspected face recognition impairment allowed us to show that the use of more appropriate normative data can increase the BFRT's efficacy in identifying individuals with face discrimination impairments; by contrast, the efficacy of the CFMT in classifying individuals with a face recognition deficit was confirmed. Finally, our data show that the lack of an inversion effect (the difference between the total scores on the upright and inverted versions of the CFMT) could be used as a further index to assess congenital prosopagnosia. Overall, our results confirm the importance of having norms derived from controls with a similar experience of faces as the "potential" prosopagnosic individuals when assessing face recognition abilities.

  15. Facial emotion recognition in male antisocial personality disorders with or without adult attention deficit hyperactivity disorder.

    PubMed

    Bagcioglu, Erman; Isikli, Hasmet; Demirel, Husrev; Sahin, Esat; Kandemir, Eyup; Dursun, Pinar; Yuksek, Erhan; Emul, Murat

    2014-07-01

    We aimed to investigate facial emotion recognition abilities in violent individuals with antisocial personality disorder (ASPD) with or without comorbid attention deficit hyperactivity disorder (ADHD). Photos of happy, surprised, fearful, sad, angry, disgusted, and neutral facial expressions were presented to all groups, and the Wender Utah Rating Scale was administered. The mean ages were as follows: 22.0 ± 1.59 in antisocial personality disorder with ADHD, 21.90 ± 1.80 in pure antisocial individuals, and 22.97 ± 2.85 in controls (p>0.05). Mean scores on the Wender Utah Rating Scale differed significantly between groups (p<0.001). Mean accurate responses to each facial emotion did not differ significantly between groups (p>0.05), except for disgusted faces, for which recognition was significantly impaired in the ASPD+ADHD and pure ASPD groups. Antisocial individuals with ADHD spent significantly more time on each facial emotion than healthy controls (p<0.05), while pure antisocial individuals took more time than healthy controls to recognize disgusted and neutral faces (p<0.05). Studies of complex social cognitive abilities in adults with ADHD and violent behaviors are lacking. This is the first study to investigate differences in social cognition cues in violent individuals, and it revealed no significant difference between pure antisocial individuals and antisocial individuals with ADHD. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Movement cues aid face recognition in developmental prosopagnosia.

    PubMed

    Bennetts, Rachel J; Butcher, Natalie; Lander, Karen; Udale, Robert; Bate, Sarah

    2015-11-01

    Seeing a face in motion can improve face recognition in the general population, and studies of face matching indicate that people with face recognition difficulties (developmental prosopagnosia; DP) may be able to use movement cues as a supplementary strategy to help them process faces. However, the use of facial movement cues in DP has not been examined in the context of familiar face recognition. This study examined whether people with DP were better at recognizing famous faces presented in motion, compared to static. Nine participants with DP and 14 age-matched controls completed a famous face recognition task. Each face was presented twice across 2 blocks: once in motion and once as a still image. Discriminability (A) was calculated for each block. Participants with DP showed a significant movement advantage overall. This was driven by a movement advantage in the first block, but not in the second block. Participants with DP were significantly worse than controls at identifying faces from static images, but there was no difference between those with DP and controls for moving images. Seeing a familiar face in motion can improve face recognition in people with DP, at least in some circumstances. The mechanisms behind this effect are unclear, but these results suggest that some people with DP are able to learn and recognize patterns of facial motion, and movement can act as a useful cue when face recognition is impaired. (c) 2015 APA, all rights reserved.
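
    The discriminability index A used here is a nonparametric sensitivity measure computed from hit and false-alarm rates. One classic formulation is Pollack and Norman's A'; whether the study used exactly this variant is an assumption, and the rates below are invented.

    ```python
    # Sketch of a nonparametric discriminability index of the kind
    # reported above (the classic A' of Pollack & Norman); whether the
    # study's "A" is exactly this variant is an assumption.
    def a_prime(h, f):
        """h: hit rate, f: false-alarm rate, both in (0, 1)."""
        if h == f:
            return 0.5                                   # chance-level sensitivity
        if h > f:
            return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
        return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

    print(a_prime(0.9, 0.25))  # approx. 0.90
    ```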

  17. Impaired recognition of body expressions in the behavioral variant of frontotemporal dementia.

    PubMed

    Van den Stock, Jan; De Winter, François-Laurent; de Gelder, Beatrice; Rangarajan, Janaki Raman; Cypers, Gert; Maes, Frederik; Sunaert, Stefan; Goffin, Karolien; Vandenberghe, Rik; Vandenbulcke, Mathieu

    2015-08-01

    Progressive deterioration of social cognition and emotion processing are core symptoms of the behavioral variant of frontotemporal dementia (bvFTD). Here we investigate whether bvFTD is also associated with impaired recognition of static (Experiment 1) and dynamic (Experiment 2) bodily expressions. In addition, we compared body expression processing with processing of static (Experiment 3) and dynamic (Experiment 4) facial expressions, as well as with face identity processing (Experiment 5). The results reveal that bvFTD is associated with impaired recognition of static and dynamic bodily and facial expressions, while identity processing was intact. No differential impairments were observed regarding motion (static vs. dynamic) or category (body vs. face). Within the bvFTD group, we observed a significant partial correlation between body and face expression recognition, when controlling for performance on the identity task. Voxel-Based Morphometry (VBM) analysis revealed that body emotion recognition was positively associated with gray matter volume in a region of the inferior frontal gyrus (pars orbitalis/triangularis). The results are in line with a supramodal emotion recognition deficit in bvFTD. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. "We all look the same to me": positive emotions eliminate the own-race in face recognition.

    PubMed

    Johnson, Kareem J; Fredrickson, Barbara L

    2005-11-01

    Extrapolating from the broaden-and-build theory, we hypothesized that positive emotion may reduce the own-race bias in facial recognition. In Experiments 1 and 2, Caucasian participants (N = 89) viewed Black and White faces for a recognition task. They viewed videos eliciting joy, fear, or neutrality before the learning (Experiment 1) or testing (Experiment 2) stages of the task. Results reliably supported the hypothesis. Relative to fear or a neutral state, joy experienced before either stage improved recognition of Black faces and significantly reduced the own-race bias. Discussion centers on possible mechanisms for this reduction of the own-race bias, including improvements in holistic processing and promotion of a common in-group identity due to positive emotions.

  19. Holistic processing, contact, and the other-race effect in face recognition.

    PubMed

    Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle

    2014-12-01

    Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks, and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in holistic processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Emotional Processing, Recognition, Empathy and Evoked Facial Expression in Eating Disorders: An Experimental Study to Map Deficits in Social Cognition

    PubMed Central

    Cardi, Valentina; Corfield, Freya; Leppanen, Jenni; Rhind, Charlotte; Deriziotis, Stephanie; Hadjimichalis, Alexandra; Hibbs, Rebecca; Micali, Nadia; Treasure, Janet

    2015-01-01

    Background: Difficulties in social cognition have been identified in eating disorders (EDs), but the exact profile of these abnormalities is unclear. The aim of this study is to examine distinct processes of social cognition in this patient group, including attentional processing and recognition, empathic reaction, and evoked facial expression in response to discrete vignettes of others displaying positive (i.e. happiness) or negative (i.e. sadness and anger) emotions. Method: One hundred and thirty-eight female participants were included in the study: 73 healthy controls (HCs) and 65 individuals with an ED (49 with Anorexia Nervosa and 16 with Bulimia Nervosa). Self-report and behavioural measures were used. Results: Participants with EDs did not display specific abnormalities in emotional processing, recognition, or empathic response to others' basic discrete emotions. However, they had poorer facial expressivity and a tendency to turn away from emotional displays. Conclusion: Treatments focusing on the development of non-verbal emotional communication skills might be of benefit for patients with EDs. PMID:26252220