Hargreaves, A; Mothersill, O; Anderson, M; Lawless, S; Corvin, A; Donohoe, G
2016-10-28
Deficits in facial emotion recognition have been associated with functional impairments in patients with Schizophrenia (SZ). Whilst a strong ecological argument has been made for the use of both dynamic facial expressions and varied emotion intensities in research, SZ emotion recognition studies to date have primarily used static stimuli at a single, 100% emotion intensity. To address this issue, the present study aimed to investigate accuracy of emotion recognition amongst patients with SZ and healthy subjects using dynamic facial emotion stimuli of varying intensities. To this end, an emotion recognition task (ERT) designed by Montagne (2007) was adapted and employed. 47 patients with a DSM-IV diagnosis of SZ and 51 healthy participants were assessed for emotion recognition. Results of the ERT were tested for correlation with performance in areas of cognitive ability typically found to be impaired in psychosis, including IQ, memory, attention and social cognition. Patients performed less well than healthy participants at recognising each of the 6 emotions analysed. Surprisingly, however, the groups did not differ in terms of the impact of emotion intensity on recognition accuracy: for both groups higher intensity levels predicted greater accuracy, but no significant interaction between diagnosis and emotion intensity was found for any of the 6 emotions. Accuracy of emotion recognition was, however, more strongly correlated with cognition in the patient cohort. Whilst this study demonstrates the feasibility of using ecologically valid dynamic stimuli in the study of emotion recognition accuracy, varying the intensity of the displayed emotion did not affect patients and healthy participants differentially, and thus may not be a necessary variable to include in emotion recognition research. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M; Begeer, Sander
2014-09-01
The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to a reliance on relatively small samples. It has also been proposed that although children with ASD may correctly identify emotion expressions, they rely on more deliberate, more time-consuming strategies to do so compared with typically developing children. In the current study, we examined both emotion recognition accuracy and response time in a large sample of children, and explored the moderating influence of verbal ability on these findings. The sample consisted of 86 children with ASD (M age = 10.65) and 114 typically developing children (M age = 10.32) between 7 and 13 years of age. All children completed a pre-test (emotion word-word matching) and a test phase of basic emotion recognition, in which they were required to match a target emotion expression to the correct emotion word; accuracy and response time were recorded. Verbal IQ was controlled for in the analyses. We found no evidence of a systematic deficit in emotion recognition accuracy or response time for children with ASD when controlling for verbal ability. However, when controlling for children's accuracy in word-word matching, children with ASD had significantly lower emotion recognition accuracy than typically developing children. The findings suggest that the social impairments observed in children with ASD are not the result of marked deficits in basic emotion recognition accuracy or longer response times. However, children with ASD may be relying on other perceptual skills (such as advanced word-word matching) to complete emotion recognition tasks at a level similar to that of typically developing children.
Krendl, Anne C; Rule, Nicholas O; Ambady, Nalini
2014-09-01
Young adults can be surprisingly accurate at making inferences about people from their faces. Although these first impressions have important consequences for both the perceiver and the target, it remains an open question whether first impression accuracy is preserved with age. Specifically, could age differences in impressions toward others stem from age-related deficits in accurately detecting complex social cues? Research on aging and impression formation suggests that young and older adults show relative consensus in their first impressions, but it is unknown whether they differ in accuracy. It has been widely shown that aging disrupts emotion recognition accuracy, and that these impairments may predict deficits in other social judgments, such as detecting deceit. However, it is unclear whether general impression formation accuracy (e.g., emotion recognition accuracy, detecting complex social cues) relies on similar or distinct mechanisms. It is important to examine this question to evaluate how, if at all, aging might affect overall accuracy. Here, we examined whether aging impaired first impression accuracy in predicting real-world outcomes and categorizing social group membership. Specifically, we studied whether emotion recognition accuracy and age-related cognitive decline (which has been implicated in exacerbating deficits in emotion recognition) predict first impression accuracy. Our results revealed that emotion recognition accuracy did not predict first impression accuracy, nor did age-related cognitive decline impair it. These findings suggest that domains of social perception outside of emotion recognition may rely on mechanisms that are relatively unimpaired by aging.
Yu, Shao Hua; Zhu, Jun Peng; Xu, You; Zheng, Lei Lei; Chai, Hao; He, Wei; Liu, Wei Bo; Li, Hui Chun; Wang, Wei
2012-12-01
To study the contribution of executive function to abnormal recognition of facial expressions of emotion in schizophrenia patients, recognition of facial expressions of emotion was assessed with the Japanese and Caucasian Facial Expressions of Emotion (JACFEE) set and executive function with the Wisconsin Card Sorting Test (WCST), alongside the Positive and Negative Syndrome Scale and the Hamilton Anxiety and Depression Scales, in 88 paranoid schizophrenia patients and 75 healthy volunteers. Patients scored higher on the Positive and Negative Syndrome Scale and the Hamilton Anxiety and Depression Scales, and displayed lower JACFEE recognition accuracy and poorer WCST performance. In patients, JACFEE recognition accuracy for contempt and disgust was negatively correlated with the negative symptom score, recognition accuracy for fear was positively correlated with the positive symptom score, and recognition accuracy for surprise was negatively correlated with the general psychopathology score. Moreover, WCST performance predicted the JACFEE recognition accuracy of contempt, disgust, and sadness in patients, and perseverative errors negatively predicted the recognition accuracy of sadness in healthy volunteers. The JACFEE recognition accuracy of sadness in turn predicted the WCST categories completed in paranoid schizophrenia patients. Recognition accuracy of social/moral emotions such as contempt, disgust and sadness is related to executive function in paranoid schizophrenia patients, especially for sadness. Copyright © 2012 The Editorial Board of Biomedical and Environmental Sciences. Published by Elsevier B.V. All rights reserved.
Wolf, Richard C; Pujara, Maia; Baskaya, Mustafa K; Koenigs, Michael
2016-09-01
Facial emotion recognition is a critical aspect of human communication. Since abnormalities in facial emotion recognition are associated with social and affective impairment in a variety of psychiatric and neurological conditions, identifying the neural substrates and psychological processes underlying facial emotion recognition will help advance basic and translational research on social-affective function. Ventromedial prefrontal cortex (vmPFC) has recently been implicated in deploying visual attention to the eyes of emotional faces, although there is mixed evidence regarding the importance of this brain region for recognition accuracy. In the present study of neurological patients with vmPFC damage, we used an emotion recognition task with morphed facial expressions of varying intensities to determine (1) whether vmPFC is essential for emotion recognition accuracy, and (2) whether instructed attention to the eyes of faces would be sufficient to improve any accuracy deficits. We found that vmPFC lesion patients are impaired, relative to neurologically healthy adults, at recognizing moderate intensity expressions of anger and that recognition accuracy can be improved by providing instructions of where to fixate. These results suggest that vmPFC may be important for the recognition of facial emotion through a role in guiding visual attention to emotionally salient regions of faces. Copyright © 2016 Elsevier Ltd. All rights reserved.
Emotion recognition and social adjustment in school-aged girls and boys.
Leppänen, J M; Hietanen, J K
2001-12-01
The present study investigated emotion recognition accuracy and its relation to social adjustment in 7- to 10-year-old children. The ability to recognize basic emotions from facial and vocal expressions was measured and compared with peer popularity and teacher-rated social competence. The results showed that emotion recognition was related to these measures of social adjustment, but the child's gender and the emotion category affected this relationship. Emotion recognition accuracy was significantly related to social adjustment for girls, but not for boys. For girls, recognition of surprise in particular was related to social adjustment. Together, these results suggest that the ability to recognize others' emotional states from nonverbal cues is an important socio-cognitive ability for school-aged girls.
Facial emotion recognition and borderline personality pathology.
Meehan, Kevin B; De Panfilis, Chiara; Cain, Nicole M; Antonucci, Camilla; Soliani, Antonio; Clarkin, John F; Sambataro, Fabio
2017-09-01
The impact of borderline personality pathology on facial emotion recognition has been in dispute, with impaired, comparable, and enhanced accuracy all reported in high borderline personality groups. Discrepancies are likely driven by variations in facial emotion recognition tasks across studies (stimulus type/intensity) and by heterogeneity in borderline personality pathology. This study evaluates facial emotion recognition for neutral and negative emotions (fear/sadness/disgust/anger) presented at varying intensities. Effortful control was evaluated as a moderator of facial emotion recognition in borderline personality. Non-clinical multicultural undergraduates (n = 132) completed a morphed facial emotion recognition task of neutral and negative emotional expressions across different intensities (100% neutral; 25%/50%/75% emotion) and self-reported borderline personality features and effortful control. Greater borderline personality features were related to decreased accuracy in detecting neutral faces, but increased accuracy in detecting negative emotion faces, particularly at low-intensity thresholds. This pattern was moderated by effortful control: for individuals with low but not high effortful control, greater borderline personality features were related to misattributions of emotion to neutral expressions, and to enhanced detection of low-intensity emotional expressions. Individuals with high borderline personality features may therefore exhibit a bias toward detecting negative emotions that are not or barely present; however, good self-regulatory skills may protect against this potential social-cognitive vulnerability. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Horton, Leslie E; Bridgwater, Miranda A; Haas, Gretchen L
2017-05-01
Emotion recognition, a social cognition domain, is impaired in people with schizophrenia and contributes to social dysfunction. Whether impaired emotion recognition emerges as a manifestation of illness or predates symptoms is unclear. Findings from studies of emotion recognition impairments in first-degree relatives of people with schizophrenia are mixed and, to our knowledge, no studies have investigated the link between emotion recognition and social functioning in that population. This study examined facial affect recognition and social skills in 16 offspring of parents with schizophrenia (familial high-risk/FHR) compared with 34 age- and sex-matched healthy controls (HC), ages 7-19. As hypothesised, FHR children showed poorer overall accuracy, poorer accuracy in identifying fearful faces, and slower overall recognition relative to controls. Age-adjusted facial affect recognition accuracy scores predicted parents' overall ratings of their child's social skills in both groups. This study supports the presence of facial affect recognition deficits in FHR children. Importantly, as the first known study to suggest the presence of these deficits in young, asymptomatic FHR children, it extends findings to a developmental stage predating symptoms. Further, the findings point to a relationship between early emotion recognition and social skills. Improved characterisation of deficits in FHR children could inform early intervention.
Intelligibility of emotional speech in younger and older adults.
Dupuis, Kate; Pichora-Fuller, M Kathleen
2014-01-01
Little is known about the influence of vocal emotions on speech understanding. Word recognition accuracy for stimuli spoken to portray seven emotions (anger, disgust, fear, sadness, neutral, happiness, and pleasant surprise) was tested in younger and older listeners. Emotions were presented in either mixed (heterogeneous emotions mixed in a list) or blocked (homogeneous emotion blocked in a list) conditions. Three main hypotheses were tested. First, vocal emotion affects word recognition accuracy; specifically, portrayals of fear enhance word recognition accuracy because listeners orient to threatening information and/or distinctive acoustical cues such as high pitch mean and variation. Second, older listeners recognize words less accurately than younger listeners, but the effects of different emotions on intelligibility are similar across age groups. Third, blocking emotions in a list results in better word recognition accuracy, especially for older listeners, and reduces the effect of emotion on intelligibility, because as listeners develop expectations about vocal emotion, the allocation of processing resources can shift from emotional to lexical processing. Emotion was the within-subjects variable: all participants heard speech stimuli consisting of a carrier phrase followed by a target word spoken by either a younger or an older talker, with an equal number of stimuli portraying each of the seven vocal emotions. The speech was presented in multi-talker babble at signal-to-noise ratios adjusted for each talker and each listener age group. Listener age (younger, older), condition (mixed, blocked), and talker (younger, older) were the main between-subjects variables. Fifty-six students (M age = 18.3 years) were recruited from an undergraduate psychology course; 56 older adults (M age = 72.3 years) were recruited from a volunteer pool. All participants had clinically normal pure-tone audiometric thresholds at frequencies ≤3000 Hz. There were significant main effects of emotion, listener age group, and condition on the accuracy of word recognition in noise. Stimuli spoken in a fearful voice were the most intelligible, while those spoken in a sad voice were the least intelligible. Overall, word recognition accuracy was poorer for older than younger adults, but there was no main effect of talker, and the pattern of the effects of different emotions on intelligibility did not differ significantly across age groups. Acoustical analyses helped elucidate the effect of emotion and some inter-talker differences. Finally, all participants performed better when emotions were blocked. For both groups, performance improved over repeated presentations of each emotion in both blocked and mixed conditions. These results are the first to demonstrate a relationship between vocal emotion and word recognition accuracy in noise for younger and older listeners. In particular, the enhancement of intelligibility by emotion is greatest for words spoken to portray fear and presented heterogeneously with other emotions. Fear may have a specialized role in orienting attention to words heard in noise. This finding may be an auditory counterpart to the enhanced detection of threat information in visual displays. The effect of vocal emotion on word recognition accuracy is preserved in older listeners with good audiograms, and both age groups benefit from blocking and the repetition of emotions.
Hoffmann, Holger; Kessler, Henrik; Eppel, Tobias; Rukavina, Stefanie; Traue, Harald C
2010-11-01
Two experiments were conducted in order to investigate the effect of expression intensity on gender differences in the recognition of facial emotions. The first experiment compared recognition accuracy between female and male participants when emotional faces were shown with full-blown (100% emotional content) or subtle expressiveness (50%). In a second experiment more finely grained analyses were applied in order to measure recognition accuracy as a function of expression intensity (40%-100%). The results show that although women were more accurate than men in recognizing subtle facial displays of emotion, there was no difference between male and female participants when recognizing highly expressive stimuli. Copyright © 2010 Elsevier B.V. All rights reserved.
Emotion recognition from multichannel EEG signals using K-nearest neighbor classification.
Li, Mi; Xu, Hongpei; Liu, Xingwang; Lu, Shengfu
2018-04-27
Many studies have examined emotion recognition based on multi-channel electroencephalogram (EEG) signals. This paper explores how the choice of frequency band and the number of channels influence EEG-based emotion recognition accuracy. We classified emotional states in the valence and arousal dimensions using different combinations of EEG channels. First, the DEAP default preprocessed data were normalized. Next, the EEG signals were divided into four frequency bands using the discrete wavelet transform, and entropy and energy were calculated as features for a K-nearest-neighbor classifier. The classification accuracies for 10, 14, 18 and 32 EEG channels based on the gamma frequency band were 89.54%, 92.28%, 93.72% and 95.70% in the valence dimension and 89.81%, 92.24%, 93.69% and 95.69% in the arousal dimension. As the number of channels increases, the classification accuracy of emotional states also increases; the classification accuracy of the gamma frequency band is greater than that of the beta frequency band, followed by the alpha and theta frequency bands. This paper provides a reference for selecting frequency bands and channels for EEG-based emotion recognition.
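The wavelet-band-plus-KNN pipeline this abstract describes is easy to prototype. Below is a minimal sketch, not the authors' released code: epoch shape (n_trials, n_channels, n_samples), the db4 wavelet, the decomposition level, and the normalization are all illustrative assumptions.

```python
# Minimal sketch of a DWT band-feature + KNN pipeline for EEG emotion
# classification. Assumes preprocessed epochs shaped
# (n_trials, n_channels, n_samples) and binary valence labels.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def band_features(epoch, wavelet="db4", level=4):
    """Per-channel wavelet energy and entropy over 4 detail bands."""
    feats = []
    for channel in epoch:
        # wavedec returns [cA4, cD4, cD3, cD2, cD1]; with 128 Hz data the
        # detail bands roughly track theta/alpha/beta/gamma.
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        for c in coeffs[1:]:
            energy = np.sum(c ** 2)
            p = c ** 2 / (energy + 1e-12)           # normalized power
            entropy = -np.sum(p * np.log2(p + 1e-12))
            feats.extend([energy, entropy])
    return np.asarray(feats)

def classify(epochs, labels, k=5):
    X = np.array([band_features(e) for e in epochs])
    X = (X - X.mean(0)) / (X.std(0) + 1e-12)        # z-score features
    return cross_val_score(KNeighborsClassifier(n_neighbors=k), X, labels, cv=5)
```

Restricting `band_features` to a subset of channels, or to a single detail band, reproduces the channel-count and band comparisons the abstract reports.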
Yang, Chengqing; Zhang, Tianhong; Li, Zezhi; Heeramun-Aubeeluck, Anisha; Liu, Na; Huang, Nan; Zhang, Jie; He, Leiying; Li, Hui; Tang, Yingying; Chen, Fazhan; Liu, Fei; Wang, Jijun; Lu, Zheng
2015-10-08
Although many studies have examined executive functions and facial emotion recognition in people with schizophrenia, few have focused on the correlation between the two, and their relationship in the siblings of patients also remains unclear. The aim of the present study was to examine the correlation between executive functions and facial emotion recognition in patients with first-episode schizophrenia and their siblings. Thirty patients with first-episode schizophrenia, twenty-six of their siblings, and thirty healthy controls were enrolled. They completed facial emotion recognition tasks using the Ekman Standard Faces Database, and executive functioning was measured by the Wisconsin Card Sorting Test (WCST). Hierarchical regression analysis was applied to assess the correlation between executive functions and facial emotion recognition. In siblings, accuracy in recognizing low-intensity 'disgust' was negatively correlated with the total correct rate in the WCST (r = -0.614, p = 0.023) but positively correlated with the total errors in the WCST (r = 0.623, p = 0.020); accuracy in recognizing 'neutral' expressions was positively correlated with the total error rate in the WCST (r = 0.683, p = 0.014) and negatively correlated with the total correct rate (r = -0.677, p = 0.017). People with schizophrenia showed impaired facial emotion recognition when identifying moderate-intensity 'happy' expressions, the accuracy of which was significantly correlated with the number of completed categories in the WCST (R(2) = 0.432, p < .05). There were no correlations between executive functions and facial emotion recognition in the healthy control group. Our study demonstrated that facial emotion recognition impairment correlates with executive function impairment in people with schizophrenia and their unaffected siblings, but not in healthy controls.
Dmitrieva, E S; Gel'man, V Ia
2011-01-01
Listener-specific features of the recognition of different emotional intonations (positive, negative and neutral) produced by male and female speakers, in the presence or absence of background noise, were studied in 49 adults aged 20-79 years. For all listeners, noise produced the most pronounced decrease in recognition accuracy for the positive emotional intonation ("joy") compared with the other intonations, whereas it did not influence the recognition accuracy of "anger" in 65-79-year-old listeners. Recognition of noisy signals was higher for emotional intonations expressed by female speakers. Acoustic characteristics of noisy and clear speech signals underlying the perception of emotional prosody were identified for adult listeners of different ages and genders.
Uskul, Ayse K; Paulmann, Silke; Weick, Mario
2016-02-01
Listeners have to pay close attention to a speaker's tone of voice (prosody) during daily conversations. This is particularly important when trying to infer the emotional state of the speaker. Although a growing body of research has explored how emotions are processed from speech in general, little is known about how psychosocial factors such as social power can shape the perception of vocal emotional attributes. Thus, the present studies explored how social power affects emotional prosody recognition. In a correlational study (Study 1) and an experimental study (Study 2), we show that high power is associated with lower accuracy in emotional prosody recognition than low power. These results suggest, for the first time, that individuals experiencing high or low power perceive emotional tone of voice differently. (c) 2016 APA, all rights reserved.
Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.
Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M
2017-01-01
Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye gaze. In a double-blind, within-subjects, randomized controlled experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye region was assessed on both occasions while participants completed a static facial emotion recognition task using medium-intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, face processing was faster across emotions under OXT; this effect was marginally significant (p < .06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces, and this was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhancement of facial emotion recognition is not necessarily mediated by an increase in attention to the eye region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest that the effect of OXT on visual attention may differ depending on task requirements.
Facial and prosodic emotion recognition in social anxiety disorder.
Tseng, Huai-Hsuan; Huang, Yu-Lien; Chen, Jian-Ting; Liang, Kuei-Yu; Lin, Chao-Cheng; Chen, Sue-Huei
2017-07-01
Patients with social anxiety disorder (SAD) have a cognitive preference to negatively evaluate emotional information. In particular, the preferential biases in prosodic emotion recognition in SAD have been much less explored. The present study aims to investigate whether SAD patients retain negative evaluation biases across visual and auditory modalities when given sufficient response time to recognise emotions. Thirty-one SAD patients and 31 age- and gender-matched healthy participants completed a culturally suitable non-verbal emotion recognition task and received clinical assessments for social anxiety and depressive symptoms. A repeated measures analysis of variance was conducted to examine group differences in emotion recognition. Compared to healthy participants, SAD patients were significantly less accurate at recognising facial and prosodic emotions, and spent more time on emotion recognition. The differences were mainly driven by the lower accuracy and longer reaction times for recognising fearful emotions in SAD patients. Within the SAD patients, lower accuracy of sad face recognition was associated with higher severity of depressive and social anxiety symptoms, particularly with avoidance symptoms. These findings may represent a cross-modality pattern of avoidance in the later stage of identifying negative emotions in SAD. This pattern may be linked to clinical symptom severity.
Gender affects body language reading.
Sokolov, Arseny A; Krüger, Samuel; Enck, Paul; Krägeloh-Mann, Ingeborg; Pavlova, Marina A
2011-01-01
Body motion is a rich source of information for social cognition. However, gender effects in body language reading are largely unknown. Here we investigated whether, and if so how, recognition of emotional expressions revealed by body motion is gender dependent. To this end, females and males were presented with point-light displays portraying knocking at a door performed with different emotional expressions. The findings show that gender affects the accuracy rather than the speed of body language reading. This effect, however, is modulated by the emotional content of actions: males excel in recognition accuracy for happy actions, whereas females tend to excel in recognition of hostile angry knocking. The advantage of women in recognition accuracy for neutral actions suggests that females are better tuned to the lack of emotional content in body actions. The study provides novel insights into gender effects in body language reading and helps to shed light on gender differences in vulnerability to neuropsychiatric and neurodevelopmental impairments of visual social cognition.
Chu, Simon; McNeill, Kimberley; Ireland, Jane L; Qurashi, Inti
2015-12-15
We investigated the relationship between a change in sleep quality and facial emotion recognition accuracy in a group of mentally-disordered inpatients at a secure forensic psychiatric unit. Patients whose sleep improved over time also showed improved facial emotion recognition while patients who showed no sleep improvement showed no change in emotion recognition. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Wingenbach, Tanja S H; Brosnan, Mark; Pfaltz, Monique C; Plichta, Michael M; Ashwin, Chris
2018-01-01
According to embodied cognition accounts, viewing others' facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others' facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others' faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions' order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.
Impact of severity of drug use on discrete emotions recognition in polysubstance abusers.
Fernández-Serrano, María José; Lozano, Oscar; Pérez-García, Miguel; Verdejo-García, Antonio
2010-06-01
Neuropsychological studies support the association between severity of drug intake and alterations in specific cognitive domains and neural systems, but there is disproportionately less research on the neuropsychology of emotional alterations associated with addiction. One of the key aspects of adaptive emotional functioning potentially relevant to addiction progression and treatment is the ability to recognize basic emotions in the faces of others. Therefore, the aims of this study were: (i) to examine facial emotion recognition in abstinent polysubstance abusers, and (ii) to explore the association between patterns of quantity and duration of use of several drugs co-abused (including alcohol, cannabis, cocaine, heroin and MDMA) and the ability to identify discrete facial emotional expressions portraying basic emotions. We compared accuracy of emotion recognition of facial expressions portraying six basic emotions (measured with the Ekman Faces Test) between polysubstance abusers (PSA, n=65) and non-drug using comparison individuals (NDCI, n=30), and used regression models to explore the association between quantity and duration of use of the different drugs co-abused and indices of recognition of each of the six emotions, while controlling for relevant socio-demographic and affect-related confounders. Results showed: (i) that PSA had significantly poorer recognition than NDCI for facial expressions of anger, disgust, fear and sadness; (ii) that measures of quantity and duration of drugs used significantly predicted poorer discrete emotions recognition: quantity of cocaine use predicted poorer anger recognition, and duration of cocaine use predicted both poorer anger and fear recognition. Severity of cocaine use also significantly predicted overall recognition accuracy. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.
Jürgens, Rebecca; Grass, Annika; Drolet, Matthis; Fischer, Julia
Both in the performative arts and in emotion research, professional actors are assumed to be capable of delivering emotions comparable to spontaneous emotional expressions. This study examines the effects of acting training on vocal emotion depiction and recognition. We predicted that professional actors express emotions in a more realistic fashion than non-professional actors. However, professional acting training may lead to a particular speech pattern; this might account for vocal expressions by actors that are less comparable to authentic samples than the ones by non-professional actors. We compared 80 emotional speech tokens from radio interviews with 80 re-enactments by professional and inexperienced actors, respectively. We analyzed recognition accuracies for emotion and authenticity ratings and compared the acoustic structure of the speech tokens. Both play-acted conditions yielded similar recognition accuracies and possessed more variable pitch contours than the spontaneous recordings. However, professional actors exhibited signs of different articulation patterns compared to non-trained speakers. Our results indicate that for emotion research, emotional expressions by professional actors are not better suited than those from non-actors.
The automaticity of emotion recognition.
Tracy, Jessica L; Robins, Richard W
2008-02-01
Evolutionary accounts of emotion typically assume that humans evolved to quickly and efficiently recognize emotion expressions because these expressions convey fitness-enhancing messages. The present research tested this assumption in 2 studies. Specifically, the authors examined (a) how quickly perceivers could recognize expressions of anger, contempt, disgust, embarrassment, fear, happiness, pride, sadness, shame, and surprise; (b) whether accuracy is improved when perceivers deliberate about each expression's meaning (vs. respond as quickly as possible); and (c) whether accurate recognition can occur under cognitive load. Across both studies, perceivers quickly and efficiently (i.e., under cognitive load) recognized most emotion expressions, including the self-conscious emotions of pride, embarrassment, and shame. Deliberation improved accuracy in some cases, but these improvements were relatively small. Discussion focuses on the implications of these findings for the cognitive processes underlying emotion recognition.
Effect of Time Delay on Recognition Memory for Pictures: The Modulatory Role of Emotion
Wang, Bo
2014-01-01
This study investigated the modulatory role of emotion in the effect of time delay on recognition memory for pictures. Participants viewed neutral, positive and negative pictures, and took a recognition memory test 5 minutes, 24 hours, or 1 week after learning. The findings are: 1) For neutral, positive and negative pictures, overall recognition accuracy at the 5-min delay did not significantly differ from that at the 24-h delay. For neutral and positive pictures, overall recognition accuracy at the 1-week delay was lower than at the 24-h delay; for negative pictures, overall recognition at the 24-h and 1-week delays did not significantly differ. Negative emotion therefore modulates the effect of time delay on recognition memory, maintaining overall recognition accuracy only within a certain time frame. 2) For the three types of pictures, recollection and familiarity at the 5-min delay did not significantly differ from those at the 24-h and 1-week delays. Thus emotion does not appear to modulate the effect of time delay on recollection and familiarity. However, recollection at the 24-h delay was higher than at the 1-week delay, whereas familiarity at the 24-h delay was lower than at the 1-week delay.
Nonlinguistic vocalizations from online amateur videos for emotion research: A validated corpus.
Anikin, Andrey; Persson, Tomas
2017-04-01
This study introduces a corpus of 260 naturalistic human nonlinguistic vocalizations representing nine emotions: amusement, anger, disgust, effort, fear, joy, pain, pleasure, and sadness. Recognition accuracy in a rating task varied greatly by emotion, from <40% for joy and pain to >70% for amusement, pleasure, fear, and sadness. In contrast, the raters' linguistic-cultural group had no effect on recognition accuracy: the predominantly English-language corpus was classified with similar accuracy by participants from Brazil, Russia, Sweden, and the UK/USA. Supervised random forest models classified the sounds as accurately as the human raters. The best acoustic predictors of emotion were pitch, harmonicity, and the spacing and regularity of syllables. This corpus of ecologically valid emotional vocalizations can be filtered to include only sounds with high recognition rates, in order to study reactions to emotional stimuli of known perceptual types (reception side), or can be used in its entirety to study the association between affective states and vocal expressions (production side).
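As an illustration of the modeling step this abstract mentions, the sketch below classifies vocalizations with a random forest from a precomputed acoustic feature table and ranks predictors by importance. The file name and column names are hypothetical assumptions, not the corpus's released format.

```python
# Illustrative sketch: random forest over acoustic features, plus a
# feature-importance ranking analogous to "best acoustic predictors".
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("corpus_acoustics.csv")   # hypothetical feature table
X = df[["pitch_median", "harmonicity", "syllable_spacing", "syllable_regularity"]]
y = df["emotion"]

forest = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(forest, X, y, cv=5).mean())   # cross-validated accuracy
forest.fit(X, y)
print(sorted(zip(forest.feature_importances_, X.columns), reverse=True))
```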
Sawyer, Alyssa C. P.; Williamson, Paul; Young, Robyn
2014-01-01
Deficits in emotion recognition and social interaction characterize individuals with Asperger's Disorder (AS). Moreover they also appear to be less able to accurately use confidence to gauge their emotion recognition accuracy (i.e., metacognitive monitoring). The aim of this study was to extend this finding by considering both monitoring and…
Tell, Dina; Davidson, Denise
2015-01-01
In this research, the emotion recognition abilities of children with autism spectrum disorder and typically developing children were compared. When facial expressions and situational cues of emotion were congruent, accuracy in recognizing emotions was good for both children with autism spectrum disorder and typically developing children. When…
van Bokhorst, Lindsey G; Knapová, Lenka; Majoranc, Kim; Szebeni, Zea K; Táborský, Adam; Tomić, Dragana; Cañadas, Elena
2016-01-01
In many sports, such as figure skating or gymnastics, the outcome of a performance does not rely exclusively on objective measurements but on more subjective cues. Judges need high attentional capacities to process visual information and overcome fatigue. Their emotion recognition abilities might also play a role in detecting errors and making a more accurate assessment. Moreover, the scoring given by judges could be influenced by their level of expertise. This study aims to assess how rhythmic gymnastics judges' emotion recognition and attentional abilities influence the accuracy of performance assessment. Data will be collected from rhythmic gymnastics judges and coaches at different international levels. The study will employ an online questionnaire consisting of an emotion recognition test and an attentional test. Participants' task is to watch a set of videotaped rhythmic gymnastics performances and evaluate them on the artistic and execution components of performance. Their scoring will be compared with the official scores given at the competition from which each video was taken, to measure the accuracy of the participants' evaluations. The proposed research represents an interdisciplinary approach that integrates cognitive and sport psychology within experimental and applied contexts. The current study advances the theoretical understanding of how emotional and attentional aspects affect the evaluation of sport performance. The results will provide valuable evidence on the direction and strength of the relationship between the above-mentioned factors and the accuracy of sport performance evaluation. Importantly, practical implications might be drawn from this study: intervention programs directed at improving the accuracy of judges could be created based on an understanding of how emotion recognition and attentional abilities relate to the accuracy of performance assessment.
Recognition of emotion from body language among patients with unipolar depression
Loi, Felice; Vaidya, Jatin G.; Paradiso, Sergio
2013-01-01
Major depression may be associated with abnormal perception of emotions and impairment in social adaptation. Emotion recognition from body language and its possible implications for social adjustment have not been examined in patients with depression. Three groups of participants (51 with depression; 68 with a history of depression in remission; and 69 never-depressed healthy volunteers) were compared on static and dynamic tasks of emotion recognition from body language. Psychosocial adjustment was assessed using the Social Adjustment Scale Self-Report (SAS-SR). Participants with current depression showed reduced recognition accuracy for happy stimuli across tasks relative to remission and comparison participants. Participants with depression tended to show poorer psychosocial adaptation relative to the remission and comparison groups. Correlations between perception accuracy for happiness and scores on the SAS-SR were largely not significant. These results indicate that depression is associated with a reduced ability to appraise positive stimuli of emotional body language, but that emotion recognition performance is not tied to social adjustment. These alterations do not appear to be present in participants in remission, suggesting state-like qualities.
Recognition of emotion with temporal lobe epilepsy and asymmetrical amygdala damage.
Fowler, Helen L; Baker, Gus A; Tipples, Jason; Hare, Dougal J; Keller, Simon; Chadwick, David W; Young, Andrew W
2006-08-01
Impairments in emotion recognition occur when there is bilateral damage to the amygdala. In this study, ability to recognize auditory and visual expressions of emotion was investigated in people with asymmetrical amygdala damage (AAD) and temporal lobe epilepsy (TLE). Recognition of five emotions was tested across three participant groups: those with right AAD and TLE, those with left AAD and TLE, and a comparison group. Four tasks were administered: recognition of emotion from facial expressions, sentences describing emotion-laden situations, nonverbal sounds, and prosody. Accuracy scores for each task and emotion were analysed, and no consistent overall effect of AAD on emotion recognition was found. However, some individual participants with AAD were significantly impaired at recognizing emotions, in both auditory and visual domains. The findings indicate that a minority of individuals with AAD have impairments in emotion recognition, but no evidence of specific impairments (e.g., visual or auditory) was found.
Zhu, Lianzhang; Chen, Leiming; Zhao, Dehai
2017-01-01
Accurate emotion recognition from speech is important for applications like smart health care, smart entertainment, and other smart services. High-accuracy emotion recognition from Chinese speech is challenging due to the complexities of the Chinese language. In this paper, we explore how to improve the accuracy of speech emotion recognition, including speech signal feature extraction and emotion classification methods. Five types of features are extracted from a speech sample: mel-frequency cepstral coefficients (MFCC), pitch, formants, short-term zero-crossing rate, and short-term energy. By comparing statistical features with deep features extracted by a Deep Belief Network (DBN), we attempt to find the best features for identifying the emotional status of speech. We propose a novel classification method that combines DBN and SVM (support vector machine) instead of using only one of them. In addition, a conjugate gradient method is applied to train the DBN in order to speed up the training process. Gender-dependent experiments are conducted using an emotional speech database created by the Chinese Academy of Sciences. The results show that DBN features reflect emotional status better than hand-crafted features, and our new classification approach achieves an accuracy of 95.8%, which is higher than using either DBN or SVM separately. The results also show that DBN can work very well on small training databases if it is properly designed.
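A hedged sketch of the hybrid idea above: unsupervised feature learning feeding an SVM. The paper's conjugate-gradient-trained multi-layer DBN is out of scope here, so scikit-learn's single-layer BernoulliRBM stands in for the deep network, librosa computes a subset of the listed hand-crafted features (formants are omitted, as librosa does not extract them), and all hyperparameters are illustrative.

```python
# Sketch: hand-crafted speech features -> RBM feature learning -> SVM.
import numpy as np
import librosa
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def handcrafted_features(path):
    """MFCC, pitch, zero-crossing rate, and energy statistics for one file."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    zcr = librosa.feature.zero_crossing_rate(y)
    rms = librosa.feature.rms(y=y)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)   # pitch track
    parts = [mfcc.mean(1), mfcc.std(1),
             [zcr.mean(), rms.mean(), np.nanmean(f0), np.nanstd(f0)]]
    return np.concatenate([np.ravel(p) for p in parts])

model = Pipeline([
    ("scale", MinMaxScaler()),            # RBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
    ("svm", SVC(kernel="rbf", C=10.0)),   # SVM decides from learned features
])
# model.fit(X_train, y_train); model.score(X_test, y_test)
```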
Parker, Alison E.; Mathis, Erin T.; Kupersmidt, Janis B.
2016-01-01
The study examined children's recognition of emotion from faces and body poses, as well as gender differences in these recognition abilities. Preschool-aged children (N = 55) and their parents and teachers participated in the study. The children completed a web-based measure of emotion recognition skills comprising five tasks (three with faces and two with bodies). Parents and teachers reported on children's aggressive behaviors and social skills. Children's emotion recognition accuracy on two of the three facial tasks and one of the body tasks was related to teacher reports of social skills. Some of these relations were moderated by child gender; in particular, the relationships between emotion recognition accuracy and reports of children's behavior were stronger for boys than for girls. Identifying preschool-aged children's strengths and weaknesses in identifying emotion from faces and body poses may help guide interventions with children whose problems with social and behavioral functioning may be due, in part, to deficits in emotional knowledge. Further developmental implications of these findings are discussed.
Test battery for measuring the perception and recognition of facial expressions of emotion
Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner
2014-01-01
Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations.
Recognizing biological motion and emotions from point-light displays in autism spectrum disorders.
Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P; Wenderoth, Nicole; Alaerts, Kaat
2012-01-01
One of the main characteristics of Autism Spectrum Disorder (ASD) is difficulty with social interaction and communication. Here, we explored ASD-related alterations in 'reading' the body language of other humans. Accuracy and reaction times were assessed in two observational tasks involving the recognition of 'biological motion' and 'emotions' from point-light displays (PLDs). Eye movements were recorded during completion of the tests. Results indicated that typically developed participants were more accurate than ASD subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed in two control tasks (involving the indication of color changes in the moving point-lights). Group differences in reaction times existed on all tasks, but effect sizes were larger for the biological motion and emotion recognition tasks. Biological motion recognition ability was related to a person's ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not be attributed entirely to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point-light displays. Eye movements were assessed during completion of the tasks, and results indicated that ASD participants generally produced more saccades and shorter fixation durations than the control group. However, especially for emotion recognition, these altered eye movements were associated with reductions in task performance.
ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition.
Zhang, Jianhai; Chen, Ming; Zhao, Shaokai; Hu, Sanqing; Shi, Zhiguo; Cao, Yu
2016-09-22
Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the need for a more immersive human-computer interface (HCI) environment. To improve recognition performance, multi-channel EEG signals are usually used, but a large set of EEG sensor channels adds computational complexity and causes users inconvenience. ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on the Database for Emotion Analysis using Physiological Signals (DEAP). Three strategies were employed to select the best channels for classifying four emotional states (joy, fear, sadness and relaxation), and a support vector machine (SVM) was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods, and a comparison with similar strategies based on the F-score is given. Strategies that evaluate a channel as a unit gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels' weights according to their contribution to classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy of 59.13% ± 11.00% using 19 channels). In addition, the selection of subject-independent channels related to emotion processing was also implemented. The sensors selected subject-independently, from the frontal and parietal lobes, have been identified as providing more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. These results will contribute to the realization of a practical EEG-based emotion recognition system.
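The channel-ranking idea above can be approximated with a basic two-class ReliefF. The sketch below estimates a weight per feature, aggregates weights per channel, and leaves SVM validation as in the paper's protocol; the channel-major feature layout, neighbor count, and all other parameters are illustrative assumptions rather than the paper's exact setup.

```python
# Basic ReliefF weights per feature, summed per channel to rank channels.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def relieff_weights(X, y, n_neighbors=10):
    """Reward features that separate nearest misses, penalize those
    that separate nearest hits (simple two-class ReliefF)."""
    X = (X - X.min(0)) / (np.ptp(X, axis=0) + 1e-12)
    y = np.asarray(y)
    w = np.zeros(X.shape[1])
    for i in range(len(X)):
        d = np.abs(X - X[i]).sum(1)                 # Manhattan distances
        d[i] = np.inf                               # exclude self
        hits = np.argsort(np.where(y == y[i], d, np.inf))[:n_neighbors]
        misses = np.argsort(np.where(y != y[i], d, np.inf))[:n_neighbors]
        w += np.abs(X[misses] - X[i]).mean(0) - np.abs(X[hits] - X[i]).mean(0)
    return w / len(X)

def top_channels(X, y, n_channels, feats_per_channel, keep=8):
    """Rank channels by the summed ReliefF weight of their features,
    assuming columns are grouped channel by channel."""
    w = relieff_weights(X, y).reshape(n_channels, feats_per_channel)
    return np.argsort(w.sum(1))[::-1][:keep]

# Validation of the reduced montage, e.g. with the selected channels' columns:
# cross_val_score(SVC(kernel="rbf"), X[:, cols], y, cv=5)
```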
Random Deep Belief Networks for Recognizing Emotions from Speech Signals.
Wen, Guihua; Li, Huihui; Huang, Jubing; Li, Danyang; Xun, Eryang
2017-01-01
Human emotions can now be recognized from speech signals using machine learning methods; however, these methods are challenged by low recognition accuracy in real applications because they lack rich representational ability. Deep belief networks (DBN) can automatically discover multiple levels of representation in speech signals. To make full use of this advantage, this paper presents an ensemble of random deep belief networks (RDBN) for speech emotion recognition. It first extracts low-level features from the input speech signal and then uses them to construct many random subspaces. Each random subspace is provided to a DBN, which yields higher-level features used as the input of a classifier that outputs an emotion label. All output emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN achieves better accuracy than the compared methods for speech emotion recognition.
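A compact way to mimic this random-subspace ensemble is shown below, under loose assumptions: scikit-learn's BaggingClassifier (with the `estimator` parameter of scikit-learn ≥1.2) draws the random feature subspaces, an MLP stands in for each DBN member, and predictions are fused by averaging class probabilities, a soft analogue of the paper's majority vote. All hyperparameters are illustrative.

```python
# Random-subspace ensemble: each member sees a random subset of features.
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier

rdbn_like = BaggingClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300),
    n_estimators=20,          # number of random subspaces
    max_features=0.5,         # each member sees half of the features
    bootstrap=False,          # subspace method: sample features, not rows
    bootstrap_features=True,  # draw feature subsets with replacement
    n_jobs=-1,
)
# rdbn_like.fit(X_train, y_train); rdbn_like.predict(X_test)
```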
Zheng, Leilei; Chai, Hao; Chen, Wanzhen; Yu, Rongrong; He, Wei; Jiang, Zhengyan; Yu, Shaohua; Li, Huichun; Wang, Wei
2011-12-01
Early parental bonding experiences play a role in emotion recognition and expression in later adulthood, and patients with personality disorder frequently experience inappropriate parental bonding styles; therefore, the aim of the present study was to explore whether parental bonding style is correlated with recognition of facial emotion in personality disorder patients. The Parental Bonding Instrument (PBI) and the Matsumoto and Ekman Japanese and Caucasian Facial Expressions of Emotion (JACFEE) photo set tests were administered to 289 participants. Patients scored lower on parental Care but higher on the parental Freedom Control and Autonomy Denial subscales, and they displayed less accuracy when recognizing contempt, disgust and happiness than the healthy volunteers. In healthy volunteers, maternal Autonomy Denial significantly predicted accuracy when recognizing fear, and maternal Care predicted accuracy when recognizing sadness. In patients, paternal Care negatively predicted accuracy when recognizing anger, paternal Freedom Control predicted the perceived intensity of contempt, and maternal Care predicted both the accuracy of recognizing sadness and the perceived intensity of disgust. Parental bonding styles have an impact on the decoding process and on sensitivity when recognizing facial emotions, especially in personality disorder patients. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.
Kensinger, Elizabeth A; Choi, Hae-Yoon; Murray, Brendan D; Rajaram, Suparna
2016-07-01
In daily life, emotional events are often discussed with others. The influence of these social interactions on the veracity of emotional memories has rarely been investigated. The authors (Choi, Kensinger, & Rajaram, Memory and Cognition, 41, 403-415, 2013) previously demonstrated that when the categorical relatedness of information is controlled, emotional items are more accurately remembered than neutral items. The present study examined whether emotion would continue to improve the accuracy of memory when individuals discussed the emotional and neutral events with others. Two different paradigms involving social influences were used to investigate this question and compare evidence. In both paradigms, participants studied stimuli that were grouped into conceptual categories of positive (e.g., celebration), negative (e.g., funeral), or neutral (e.g., astronomy) valence. After a 48-hour delay, recognition memory was tested for studied items and categorically related lures. In the first paradigm, recognition accuracy was compared when memory was tested individually or in a collaborative triad. In the second paradigm, recognition accuracy was compared when a prior retrieval session had occurred individually or with a confederate who supplied categorically related lures. In both of these paradigms, emotional stimuli were remembered more accurately than were neutral stimuli, and this pattern was preserved when social interaction occurred. In fact, in the first paradigm, there was a trend for collaboration to increase the beneficial effect of emotion on memory accuracy, and in the second paradigm, emotional lures were significantly less susceptible to the "social contagion" effect. Together, these results demonstrate that emotional memories can be more accurate than nonemotional ones even when events are discussed with others (Experiment 1) and even when that discussion introduces misinformation (Experiment 2).
Influence of gender in the recognition of basic facial expressions: A critical literature review
Forni-Santos, Larissa; Osório, Flávia L
2015-01-01
AIM: To conduct a systematic literature review about the influence of gender on the recognition of facial expressions of six basic emotions. METHODS: We conducted a systematic search with the search terms (face OR facial) AND (processing OR recognition OR perception) AND (emotional OR emotion) AND (gender or sex) in the PubMed, PsycINFO, LILACS, and SciELO electronic databases for articles assessing outcomes related to response accuracy, latency, and emotional intensity. Article selection was performed according to parameters set by COCHRANE. The reference lists of the articles found through the database search were checked for additional references of interest. RESULTS: With respect to accuracy, women tend to perform better than men when all emotions are considered as a set. Regarding specific emotions, there seem to be no gender-related differences in the recognition of happiness, whereas results are quite heterogeneous for the remaining emotions, especially sadness, anger, and disgust. Fewer articles dealt with the parameters of response latency and emotional intensity, which hinders the generalization of their findings, especially in the face of their methodological differences. CONCLUSION: The analysis of the studies conducted to date does not allow for definite conclusions concerning the role of the observer’s gender in the recognition of facial emotion, mostly because of the absence of standardized methods of investigation. PMID:26425447
Recognizing Biological Motion and Emotions from Point-Light Displays in Autism Spectrum Disorders
Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P.; Wenderoth, Nicole; Alaerts, Kaat
2012-01-01
One of the main characteristics of Autism Spectrum Disorder (ASD) is problems with social interaction and communication. Here, we explored ASD-related alterations in ‘reading’ the body language of other humans. Accuracy and reaction times were assessed in two observational tasks involving the recognition of ‘biological motion’ and ‘emotions’ from point-light displays (PLDs). Eye movements were recorded during the completion of the tests. Results indicated that typically developed participants were more accurate than ASD subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed in two control tasks (involving the indication of color changes in the moving point-lights). Group differences in reaction times existed in all tasks, but effect sizes were higher for the biological motion and emotion recognition tasks. Biological motion recognition abilities were related to a person’s ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not entirely be attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point-light displays. Eye movements were assessed during the completion of the tasks, and results indicated that ASD participants generally produced more saccades and shorter fixation durations compared to the control group. Especially for emotion recognition, these altered eye movements were associated with reductions in task performance. PMID:22970227
Isaacowitz, Derek M.; Stanley, Jennifer Tehan
2011-01-01
Older adults perform worse on traditional tests of emotion recognition accuracy than do young adults. In this paper, we review descriptive research to date on age differences in emotion recognition from facial expressions, as well as the primary theoretical frameworks that have been offered to explain these patterns. We propose that this is an area of inquiry that would benefit from an ecological approach in which contextual elements are more explicitly considered and reflected in experimental methods. Use of dynamic displays and examination of specific cues to accuracy, for example, may reveal more nuanced age-related patterns and may suggest heretofore unexplored underlying mechanisms. PMID:22125354
Generalizations of the subject-independent feature set for music-induced emotion recognition.
Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping
2011-01-01
Electroencephalogram (EEG)-based emotion recognition has been an intensely growing field. Yet, how to achieve acceptable accuracy in a practical system with as few electrodes as possible has received less attention. This study evaluates a set of subject-independent features based on the differential power asymmetry of symmetric electrode pairs [1], with emphasis on its applicability to subject variability in the music-induced emotion classification problem. The results of this study validate the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy at second-scale temporal resolution. These features could be generalized across subjects to detect emotion induced by music excerpts beyond the music database that was used to derive the emotion-specific features.
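A minimal sketch of differential power asymmetry features for symmetric electrode pairs, in the spirit of the feature set referenced above, might look like the following. The channel pairing, frequency bands, and sampling rate are assumptions.

```python
# Minimal sketch: left-minus-right band-power asymmetry features.
import numpy as np
from scipy.signal import welch

fs = 128.0
pairs = [(0, 1), (2, 3), (4, 5)]   # indices of assumed left/right symmetric channels
bands = {"alpha": (8, 13), "beta": (13, 30)}

def band_power(x, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=256)
    return pxx[(f >= lo) & (f < hi)].sum()

def asymmetry_features(eeg):
    """eeg: (n_channels, n_samples) -> one left-minus-right power
    difference per (pair, band)."""
    feats = []
    for left, right in pairs:
        for lo, hi in bands.values():
            feats.append(band_power(eeg[left], lo, hi) -
                         band_power(eeg[right], lo, hi))
    return np.array(feats)

eeg = np.random.default_rng(2).normal(size=(6, 1280))  # 10 s of toy EEG
print(asymmetry_features(eeg))
```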
Effects of Emotion on Associative Recognition: Valence and Retention Interval Matter
Pierce, Benton H.; Kensinger, Elizabeth A.
2011-01-01
In two experiments, we examined the effects of emotional valence and arousal on associative binding. Participants studied negative, positive, and neutral word pairs, followed by an associative recognition test. In Experiment 1, with a short-delayed test, accuracy for intact pairs was equivalent across valences, whereas accuracy for rearranged pairs was lower for negative than for positive and neutral pairs. In Experiment 2, we tested participants after a one-week delay and found that accuracy was greater for intact negative than for intact neutral pairs, whereas rearranged pair accuracy was equivalent across valences. These results suggest that, although negative emotional valence impairs associative binding after a short delay, it may improve binding after a longer delay. The results also suggest that valence, as well as arousal, needs to be considered when examining the effects of emotion on associative memory. PMID:21401233
Multimodal emotional state recognition using sequence-dependent deep hierarchical features.
Barros, Pablo; Jirak, Doreen; Weber, Cornelius; Wermter, Stefan
2015-12-01
Emotional state recognition has become an important topic for human-robot interaction in the past years. By determining emotion expressions, robots can identify important variables of human behavior and use these to communicate in a more human-like fashion and thereby extend the interaction possibilities. Human emotions are multimodal and spontaneous, which makes them hard for robots to recognize. Each modality has its own restrictions and constraints which, together with the non-structured behavior of spontaneous expressions, create several difficulties for the approaches in the literature, which are based on explicit feature extraction techniques and manual modality fusion. Our model uses a hierarchical feature representation to deal with spontaneous emotions, and learns how to integrate multiple modalities for non-verbal emotion recognition, making it suitable for use in an HRI scenario. Our experiments show that a significant improvement in recognition accuracy is achieved when we use hierarchical features and multimodal information, and our model improves the accuracy of state-of-the-art approaches from the 82.5% reported in the literature to 91.3% on a benchmark dataset of spontaneous emotion expressions. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
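The idea of learning to integrate modalities, rather than fusing hand-crafted features by hand, can be illustrated with a toy network. This is a generic feature-level fusion sketch in PyTorch, not the authors' architecture; the input sizes and layer widths are invented.

```python
# Minimal sketch: learned fusion of two modality encoders.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, face_dim=256, audio_dim=64, n_emotions=6):
        super().__init__()
        self.face_enc = nn.Sequential(nn.Linear(face_dim, 64), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
        # the fusion weights are learned jointly with the classifier
        self.head = nn.Linear(128, n_emotions)

    def forward(self, face, audio):
        z = torch.cat([self.face_enc(face), self.audio_enc(audio)], dim=-1)
        return self.head(z)

net = FusionNet()
logits = net(torch.randn(8, 256), torch.randn(8, 64))  # batch of 8 clips
print(logits.shape)  # torch.Size([8, 6])
```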
Caballero-Morales, Santiago-Omar
2013-01-01
An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists of acoustic phoneme modelling of emotion-specific vowels. For this, a standard phoneme-based Automatic Speech Recognition (ASR) system was built with Hidden Markov Models (HMMs), where different phoneme HMMs were built for the consonants and for the emotion-specific vowels associated with four emotional states (anger, happiness, neutral, sadness). The emotional state of a spoken sentence is then estimated by counting the number of emotion-specific vowels found in the ASR's output for the sentence. With this approach, an accuracy of 87–100% was achieved for recognizing the emotional state of Mexican Spanish speech. PMID:23935410
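The sentence-level decision rule lends itself to a compact sketch. Here the ASR output is assumed to tag each recognized vowel with the emotion-specific model that matched it (e.g. "a_anger"); the tag format is an invention for illustration, and the majority count over tags follows the counting rule described above.

```python
# Minimal sketch: estimate sentence emotion by counting emotion-specific vowels.
from collections import Counter

EMOTIONS = {"anger", "happiness", "neutral", "sadness"}

def estimate_emotion(phoneme_sequence):
    counts = Counter(tok.split("_", 1)[1]
                     for tok in phoneme_sequence
                     if "_" in tok and tok.split("_", 1)[1] in EMOTIONS)
    return counts.most_common(1)[0][0] if counts else "neutral"

asr_output = ["k", "a_anger", "s", "a_anger", "e_neutral", "o_anger"]
print(estimate_emotion(asr_output))  # -> "anger"
```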
The Relationship between Emotion Recognition Ability and Social Skills in Young Children with Autism
ERIC Educational Resources Information Center
Williams, Beth T.; Gray, Kylie M.
2013-01-01
This study assessed the relationship between emotion recognition ability and social skills in 42 young children with autistic disorder aged 4-7 years. The analyses revealed that accuracy in recognition of sadness, but not happiness, anger or fear, was associated with higher ratings on the Vineland-II Socialization domain, above and beyond the…
Emotion Recognition of Virtual Agents' Facial Expressions: The Effects of Age and Emotion Intensity
Beer, Jenay M.; Fisk, Arthur D.; Rogers, Wendy A.
2014-01-01
People make determinations about the social characteristics of an agent (e.g., robot or virtual agent) by interpreting social cues displayed by the agent, such as facial expressions. Although a considerable amount of research has been conducted investigating age-related differences in emotion recognition of human faces (e.g., Sullivan & Ruffman, 2004), the effect of age on emotion identification of virtual agent facial expressions has been largely unexplored. Age-related differences in emotion recognition of facial expressions are an important factor to consider in the design of agents that may assist older adults in a recreational or healthcare setting. The purpose of the current research was to investigate whether age-related differences in facial emotion recognition can extend to emotion-expressive virtual agents. Younger and older adults performed a recognition task with a virtual agent expressing six basic emotions. Larger age-related differences were expected for virtual agents displaying negative emotions, such as anger, sadness, and fear. In fact, the results indicated that older adults showed a decrease in emotion recognition accuracy for a virtual agent's emotions of anger, fear, and happiness. PMID:25552896
Processing of emotional reactivity and emotional memory over sleep
Baran, Bengi; Pace-Schott, Edward F.; Ericson, Callie; Spencer, Rebecca M. C.
2012-01-01
Sleep enhances memories, particularly emotional memories. As such, it has been suggested that sleep deprivation may reduce post-traumatic stress disorder. This presumes that emotional memory consolidation is paralleled by a reduction in emotional reactivity, an association that has not yet been examined. In the present experiment, we utilized an incidental memory task in humans and obtained valence and arousal ratings during two sessions separated either by 12 hours of daytime wake or 12 hours including overnight sleep. Recognition accuracy was greater following sleep relative to wake for both negative and neutral pictures. While emotional reactivity to negative pictures was greatly reduced over wake, the negative emotional response was relatively preserved over sleep. Moreover, protection of emotional reactivity was associated with greater time in REM sleep. Recognition accuracy, however, was not associated with REM. Thus, we provide the first evidence that sleep enhances emotional memory while preserving emotional reactivity. PMID:22262901
ERIC Educational Resources Information Center
Bal, Elgiz; Harden, Emily; Lamb, Damon; Van Hecke, Amy Vaughan; Denver, John W.; Porges, Stephen W.
2010-01-01
Respiratory Sinus Arrhythmia (RSA), heart rate, and accuracy and latency of emotion recognition were evaluated in children with autism spectrum disorders (ASD) and typically developing children while viewing videos of faces slowly transitioning from a neutral expression to one of six basic emotions (e.g., anger, disgust, fear, happiness, sadness,…
Empathic competencies in violent offenders
Seidel, Eva-Maria; Pfabigan, Daniela Melitta; Keckeis, Katinka; Wucherer, Anna Maria; Jahn, Thomas; Lamm, Claus; Derntl, Birgit
2013-01-01
Violent offending has often been associated with a lack of empathy, but experimental investigations are rare. The present study aimed at clarifying whether violent offenders show a general empathy deficit or specific deficits regarding the separate subcomponents. To this end, we assessed three core components of empathy (emotion recognition, perspective taking, affective responsiveness) as well as skin conductance response (SCR) in a sample of 30 male violent offenders and 30 healthy male controls. Data analysis revealed reduced accuracy in violent offenders compared to healthy controls only in emotion recognition, and that a high number of violent assaults was associated with decreased accuracy in perspective taking for angry scenes. SCR data showed reduced physiological responses in the offender group specifically for fear and disgust stimuli during emotion recognition and perspective taking. In addition, higher psychopathy scores in the violent offender group were associated with reduced accuracy in affective responsiveness. This is the first study to show that mainly emotion recognition is deficient in violent offenders whereas the other components of empathy are rather unaffected. This divergent impact of violent offending on the subcomponents of empathy suggests that all three empathy components can be targeted by therapeutic interventions separately. PMID:24035702
Emotion Recognition Ability: A Multimethod-Multitrait Study.
ERIC Educational Resources Information Center
Gaines, Margie; And Others
A common paradigm in measuring the ability to recognize facial expressions of emotion is to present photographs of facial expressions and to ask subjects to identify the emotion. The Affect Blend Test (ABT) uses this method of assessment and is scored for accuracy on specific affects as well as total accuracy. Another method of measuring affect…
Scotland, Jennifer L; McKenzie, Karen; Cossar, Jill; Murray, Aja; Michie, Amanda
2016-01-01
This study aimed to evaluate the emotion recognition abilities of adults (n=23) with an intellectual disability (ID) compared with a control group of children (n=23) without ID matched for estimated cognitive ability. The study examined the impact of: task paradigm, stimulus type and preferred processing style (global/local) on accuracy. We found that, after controlling for estimated cognitive ability, the control group performed significantly better than the individuals with ID. This provides some support for the emotion specificity hypothesis. Having a more local processing style did not significantly mediate the relation between having ID and emotion recognition, but did significantly predict emotion recognition ability after controlling for group. This suggests that processing style is related to emotion recognition independently of having ID. The availability of contextual information improved emotion recognition for people with ID when compared with line drawing stimuli, and identifying a target emotion from a choice of two was relatively easier for individuals with ID, compared with the other task paradigms. The results of the study are considered in the context of current theories of emotion recognition deficits in individuals with ID. Copyright © 2015 Elsevier Ltd. All rights reserved.
Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki
2013-09-01
This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.
Holding, Benjamin C; Laukka, Petri; Fischer, Håkan; Bänziger, Tanja; Axelsson, John; Sundelin, Tina
2017-11-01
Insufficient sleep has been associated with impaired recognition of facial emotions. However, previous studies have found inconsistent results, potentially stemming from the type of static picture task used. We therefore examined whether insufficient sleep was associated with decreased emotion recognition ability in two separate studies using a dynamic multimodal task. Study 1 used a cross-sectional design consisting of 291 participants with questionnaire measures assessing sleep duration and self-reported sleep quality for the previous night. Study 2 used an experimental design involving 181 participants where individuals were quasi-randomized into either a sleep-deprivation (N = 90) or a sleep-control (N = 91) condition. All participants from both studies were tested on the same forced-choice multimodal test of emotion recognition to assess the accuracy of emotion categorization. Sleep duration, self-reported sleep quality (study 1), and sleep deprivation (study 2) did not predict overall emotion recognition accuracy or speed. Similarly, the responses to each of the twelve emotions tested showed no evidence of impaired recognition ability, apart from one positive association suggesting that greater self-reported sleep quality could predict more accurate recognition of disgust (study 1). The studies presented here involve considerably larger samples than previous studies and the results support the null hypotheses. Therefore, we suggest that the ability to accurately categorize the emotions of others is not associated with short-term sleep duration or sleep quality and is resilient to acute periods of insufficient sleep. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
Associations between facial emotion recognition and young adolescents’ behaviors in bullying
Gini, Gianluca; Altoè, Gianmarco
2017-01-01
This study investigated whether different behaviors young adolescents can act during bullying episodes were associated with their ability to recognize morphed facial expressions of the six basic emotions, expressed at high and low intensity. The sample included 117 middle-school students (45.3% girls; mean age = 12.4 years) who filled in a peer nomination questionnaire and individually performed a computerized emotion recognition task. Bayesian generalized mixed-effects models showed a complex picture, in which type and intensity of emotions, students’ behavior and gender interacted in explaining recognition accuracy. Results were discussed with a particular focus on negative emotions and suggesting a “neutral” nature of emotion recognition ability, which does not necessarily lead to moral behavior but can also be used for pursuing immoral goals. PMID:29131871
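An analysis of this kind can be sketched with a Bayesian mixed-effects model of trial-level accuracy. The bambi formula below, the column names, and the synthetic data are all assumptions for illustration, not the authors' exact model specification.

```python
# Minimal sketch: Bayesian mixed-effects model of recognition accuracy.
import numpy as np
import pandas as pd
import bambi as bmb

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "correct": rng.integers(0, 2, n),                      # trial accuracy (0/1)
    "emotion": rng.choice(["anger", "fear", "sadness"], n),
    "intensity": rng.choice(["low", "high"], n),
    "gender": rng.choice(["girl", "boy"], n),
    "student": rng.integers(0, 40, n).astype(str),         # random intercept per student
})

model = bmb.Model("correct ~ emotion * intensity * gender + (1|student)",
                  df, family="bernoulli")
idata = model.fit(draws=500, chains=2)  # posterior samples via MCMC
```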
Facial emotion recognition ability: psychiatry nurses versus nurses from other departments.
Gultekin, Gozde; Kincir, Zeliha; Kurt, Merve; Catal, Yasir; Acil, Asli; Aydin, Aybike; Özcan, Mualla; Delikkaya, Busra N; Kacar, Selma; Emul, Murat
2016-12-01
Facial emotion recognition is a basic element of non-verbal communication. Although some researchers have shown that recognizing facial expressions may be important in the interaction between doctors and patients, there are no studies concerning facial emotion recognition in nurses. Here, we aimed to investigate facial emotion recognition ability in nurses and to compare this ability between nurses from psychiatry and other departments. In this cross-sectional study, sixty-seven nurses were divided into two groups according to their departments: psychiatry (n=31) and other departments (n=36). A Facial Emotion Recognition Test, constructed from a set of photographs from Ekman and Friesen's book "Pictures of Facial Affect", was administered to all participants. In the whole group, the most accurately recognized facial emotion was happiness (99.14%), while the least accurately recognized was fear (47.71%). There were no significant differences between the two groups in the mean accuracy rates for recognizing happy, sad, fearful, angry, and surprised facial expressions (for all, p>0.05). The ability to recognize disgusted and neutral facial emotions tended to be better in the other nurses than in the psychiatry nurses (p=0.052 and p=0.053, respectively). This study was the first to reveal no difference in facial emotion recognition ability between psychiatry nurses and non-psychiatry nurses. In medical education curricula throughout the world, no specific training program is scheduled for recognizing the emotional cues of patients. We consider that improving the ability of medical staff to recognize facial emotion expressions might be beneficial in reducing inappropriate patient-staff interactions.
NASA Astrophysics Data System (ADS)
Poock, G. K.; Martin, B. J.
1984-02-01
This was an applied investigation examining the ability of a speech recognition system to recognize speakers' inputs when the speakers were under different stress levels. Subjects were asked to speak to a voice recognition system under three conditions: (1) normal office environment, (2) emotional stress, and (3) perceptual-motor stress. Results indicate a definite relationship between voice recognition system performance and the type of low stress reference patterns used to achieve recognition.
A study of speech emotion recognition based on hybrid algorithm
NASA Astrophysics Data System (ADS)
Zhu, Ju-xia; Zhang, Chao; Lv, Zhao; Rao, Yao-quan; Wu, Xiao-pei
2011-10-01
To effectively improve the recognition accuracy of speech emotion recognition systems, a hybrid algorithm which combines a Continuous Hidden Markov Model (CHMM), an All-Class-in-One Neural Network (ACON) and a Support Vector Machine (SVM) is proposed. In the SVM and ACON methods, global statistics are used as emotional features, while in the CHMM method, instantaneous features are employed. The recognition rate of the proposed method is 92.25%, with a rejection rate of 0.78%. Furthermore, it achieves relative improvements of 8.53%, 4.69% and 0.78% over the ACON, CHMM and SVM methods, respectively. The experimental results confirm the method's effectiveness in distinguishing the emotional states of anger, happiness, neutrality and sadness.
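The fusion step of such a hybrid can be sketched by averaging class posteriors from heterogeneous classifiers. Because scikit-learn has no continuous HMM, GaussianNB and MLPClassifier act as stand-ins for the CHMM and ACON branches; the features, data, and rejection threshold are toy assumptions.

```python
# Minimal sketch: fuse heterogeneous classifiers by averaging posteriors.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X, y = rng.normal(size=(240, 24)), rng.integers(0, 4, size=240)
X_test = rng.normal(size=(40, 24))

models = [SVC(probability=True),            # SVM branch
          MLPClassifier(max_iter=300),      # stand-in for ACON
          GaussianNB()]                     # stand-in for CHMM
for m in models:
    m.fit(X, y)

# Average the per-class posteriors, then take the most probable emotion;
# a rejection rule can threshold the winning posterior.
proba = np.mean([m.predict_proba(X_test) for m in models], axis=0)
pred = proba.argmax(axis=1)
reject = proba.max(axis=1) < 0.4            # assumed rejection threshold
print(pred[:10], reject.mean())
```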
Sully, K; Sonuga-Barke, E J S; Fairchild, G
2015-07-01
There is accumulating evidence of impairments in facial emotion recognition in adolescents with conduct disorder (CD). However, the majority of studies in this area have only been able to demonstrate an association, rather than a causal link, between emotion recognition deficits and CD. To move closer towards understanding the causal pathways linking emotion recognition problems with CD, we studied emotion recognition in the unaffected first-degree relatives of CD probands, as well as those with a diagnosis of CD. Using a family-based design, we investigated facial emotion recognition in probands with CD (n = 43), their unaffected relatives (n = 21), and healthy controls (n = 38). We used the Emotion Hexagon task, an alternative forced-choice task using morphed facial expressions depicting the six primary emotions, to assess facial emotion recognition accuracy. Relative to controls, the CD group showed impaired recognition of anger, fear, happiness, sadness and surprise (all p < 0.005). Similar to probands with CD, unaffected relatives showed deficits in anger and happiness recognition relative to controls (all p < 0.008), with a trend toward a deficit in fear recognition. There were no significant differences in performance between the CD probands and the unaffected relatives following correction for multiple comparisons. These results suggest that facial emotion recognition deficits are present in adolescents who are at increased familial risk for developing antisocial behaviour, as well as those who have already developed CD. Consequently, impaired emotion recognition appears to be a viable familial risk marker or candidate endophenotype for CD.
The robustness of false memory for emotional pictures.
Bessette-Symons, Brandy A
2018-02-01
Emotional material is commonly reported to be more accurately recognised; however, there is substantial evidence of increased false alarm rates (FAR) for emotional material and several reports of stronger influences on response bias than accuracy. This pattern is more frequently reported for words than pictures. Research on the mechanisms underlying bias differences has mostly focused on word lists under short retention intervals. This article presents four series of experiments examining recognition memory for emotional pictures while varying arousal and the control over the content of the pictures at two retention intervals, and one study measuring the relatedness of the series picture sets. Under the shorter retention interval, emotion increased false alarms and reduced accuracy. Under the longer retention interval emotion increased hit rates and FAR, resulting in reduced accuracy and/or bias. At both retention intervals, the pattern of valence effects differed based on the arousal associated with the picture sets. Emotional pictures were found to be more related than neutral pictures in each set; however, the influence of relatedness alone does not provide an adequate explanation for all emotional differences. The results demonstrate substantial emotional differences in picture recognition that vary based on valence, arousal and retention interval.
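The distinction drawn here between accuracy and response bias is usually quantified with signal-detection measures. A minimal sketch, with invented hit and false-alarm rates, computes d' (accuracy) and criterion c (bias) from the two rates.

```python
# Minimal sketch: d-prime and criterion from hit and false-alarm rates.
from scipy.stats import norm

def dprime_and_c(hit_rate, fa_rate):
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_f, -0.5 * (z_h + z_f)   # d' = z(H)-z(F); c = -(z(H)+z(F))/2

# e.g. emotion raising both hits and false alarms yields a more liberal bias
print(dprime_and_c(0.85, 0.30))   # emotional pictures (invented rates)
print(dprime_and_c(0.80, 0.15))   # neutral pictures (invented rates)
```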
Oxytocin improves emotion recognition for older males.
Campbell, Anna; Ruffman, Ted; Murray, Janice E; Glue, Paul
2014-10-01
Older adults (≥60 years) perform worse than young adults (18-30 years) when recognizing facial expressions of emotion. The hypothesized cause of these changes might be declines in neurotransmitters that could affect information processing within the brain. In the present study, we examined the neuropeptide oxytocin that functions to increase neurotransmission. Research suggests that oxytocin benefits the emotion recognition of less socially able individuals. Men tend to have lower levels of oxytocin and older men tend to have worse emotion recognition than older women; therefore, there is reason to think that older men will be particularly likely to benefit from oxytocin. We examined this idea using a double-blind design, testing 68 older and 68 young adults randomly allocated to receive oxytocin nasal spray (20 international units) or placebo. Forty-five minutes afterward they completed an emotion recognition task assessing labeling accuracy for angry, disgusted, fearful, happy, neutral, and sad faces. Older males receiving oxytocin showed improved emotion recognition relative to those taking placebo. No differences were found for older females or young adults. We hypothesize that oxytocin facilitates emotion recognition by improving neurotransmission in the group with the worst emotion recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
Gaze Dynamics in the Recognition of Facial Expressions of Emotion.
Barabanschikov, Vladimir A
2015-01-01
We studied the preferentially fixated parts and features of the human face during the recognition of facial expressions of emotion. Photographs of facial expressions were used. Participants were asked to categorize these as basic emotions; during this process, eye movements were registered. It was found that variation in the intensity of an expression is mirrored in the accuracy of emotion recognition; it was also reflected in several indices of oculomotor function: the duration of inspection of certain areas of the face (its upper and bottom parts, its right and left sides), the location, number and duration of fixations, and the viewing trajectory. In particular, for low-intensity expressions, the right side of the face was attended to predominantly (right-side dominance); the right-side dominance effect was, however, absent for expressions of high intensity. For both low- and high-intensity expressions, the upper part of the face was predominantly fixated, though with greater fixation for high-intensity expressions. The majority of trials (70%), in line with findings of previous studies, revealed a V-shaped inspection trajectory. No relationship was found, however, between the accuracy of recognition of emotional expressions and either the location and duration of fixations or the pattern of gaze direction on the face. © The Author(s) 2015.
The influence of emotion on keyboard typing: an experimental study using visual stimuli.
Lee, Po-Ming; Tsui, Wei-Hsuan; Hsiao, Tzu-Chien
2014-06-20
Emotion recognition technology plays an essential role in enhancing Human-Computer Interaction (HCI). In recent years, a novel approach for emotion recognition based on keystroke dynamics has been reported. This approach is rather desirable for HCI because the data used are non-intrusive and easy to obtain. However, previous studies have investigated the phenomenon itself only to a limited extent. This study aims to examine the source of variance in keystroke typing patterns caused by emotions. A controlled experiment was conducted to collect subjects' keystroke data in different emotional states induced by the International Affective Picture System (IAPS). Two-way Valence (3) × Arousal (3) ANOVAs were used to examine the collected dataset. The results of the experiment indicate that the effect of emotion is significant (p<.001) on keystroke duration, keystroke latency, and the accuracy rate of keyboard typing. However, the size of the emotional effect is small compared to the individual variability. Our findings support the conclusion that keystroke duration, keystroke latency, and the accuracy rate of typing are influenced by emotional states. Notably, the finding about effect size suggests that the accuracy of emotion recognition could be further improved if personalized models are utilized. On the other hand, the finding also provides an explanation of why real-world applications that authenticate the identity of users by monitoring keystrokes may not be disturbed by the emotional states of users. The experiment was conducted using standard instruments and hence is expected to be highly reproducible.
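A two-way ANOVA of this form is straightforward to reproduce. The sketch below runs a 3 × 3 Valence-by-Arousal ANOVA on a synthetic keystroke-duration measure; the column names and data are assumptions.

```python
# Minimal sketch: 3x3 Valence-by-Arousal ANOVA on a keystroke measure.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(5)
n = 270
df = pd.DataFrame({
    "valence": rng.choice(["negative", "neutral", "positive"], n),
    "arousal": rng.choice(["low", "medium", "high"], n),
})
df["duration_ms"] = 95 + rng.normal(0, 10, n)   # toy keystroke durations

model = ols("duration_ms ~ C(valence) * C(arousal)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))          # main effects + interaction
```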
Face Age and Eye Gaze Influence Older Adults' Emotion Recognition.
Campbell, Anna; Murray, Janice E; Atkinson, Lianne; Ruffman, Ted
2017-07-01
Eye gaze has been shown to influence emotion recognition. In addition, older adults (over 65 years) are not as influenced by gaze direction cues as young adults (18-30 years). Nevertheless, these differences might stem from the use of young to middle-aged faces in emotion recognition research because older adults have an attention bias toward old-age faces. Therefore, using older face stimuli might allow older adults to process gaze direction cues to influence emotion recognition. To investigate this idea, young and older adults completed an emotion recognition task with young and older face stimuli displaying direct and averted gaze, assessing labeling accuracy for angry, disgusted, fearful, happy, and sad faces. Direct gaze rather than averted gaze improved young adults' recognition of emotions in young and older faces, but for older adults this was true only for older faces. The current study highlights the impact of stimulus face age and gaze direction on emotion recognition in young and older adults. The use of young face stimuli with direct gaze in most research might contribute to age-related emotion recognition differences. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Yang, Hao; Zhang, Junran; Jiang, Xiaomei; Liu, Fei
2018-04-01
In recent years, with the rapid development of machine learning techniques, deep learning algorithms have been widely used in one-dimensional physiological signal processing. In this paper we used electroencephalography (EEG) signals with a deep belief network (DBN) model, built in open-source deep learning frameworks, to identify emotional states (positive, negative and neutral); the results of the DBN were then compared with a support vector machine (SVM). The EEG signals were collected from subjects under different emotional stimuli, and the DBN and SVM were adopted to classify the EEG signals across different features and different frequency bands. We found that the average accuracy of the differential entropy (DE) feature with the DBN is 89.12%±6.54%, a better performance than previous research based on the same data set. At the same time, the classification results of the DBN are better than those of the traditional SVM (average classification accuracy of 84.2%±9.24%), and its accuracy and stability show a better trend. In three experiments at different time points, single subjects achieved consistent classification results using the DBN (mean standard deviation 1.44%), and the experimental results show that the system has steady performance and good repeatability. According to our research, the DE feature gives better classification results than other features. Furthermore, the Beta and Gamma bands in the emotion recognition model have higher classification accuracy. In summary, the performance of the classifiers is improved by using the deep learning algorithm, which provides a reference for establishing a more accurate emotion recognition system. Meanwhile, we can trace the recognition results to find the brain regions and frequency bands related to the emotions, which can help us to understand the emotional mechanism better. This study has high academic value and practical significance, and further investigation still needs to be done.
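The differential entropy (DE) feature mentioned above has a closed form for band-filtered signals that are assumed Gaussian: DE = 0.5·log(2πeσ²). The sketch below computes DE for the Beta and Gamma bands of a toy signal; the band edges and sampling rate are assumptions.

```python
# Minimal sketch: differential entropy of band-filtered EEG segments.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200.0
bands = {"beta": (14, 31), "gamma": (31, 50)}

def de_features(x):
    feats = {}
    for name, (lo, hi) in bands.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        xf = filtfilt(b, a, x)
        # DE of a Gaussian signal: 0.5 * log(2*pi*e*variance)
        feats[name] = 0.5 * np.log(2 * np.pi * np.e * np.var(xf))
    return feats

x = np.random.default_rng(6).normal(size=4000)  # 20 s of toy EEG
print(de_features(x))
```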
Emotion Recognition from EEG Signals Using Multidimensional Information in EMD Domain.
Zhuang, Ning; Zeng, Ying; Tong, Li; Zhang, Chi; Zhang, Hanming; Yan, Bin
2017-01-01
This paper introduces a method for feature extraction and emotion recognition based on empirical mode decomposition (EMD). Using EMD, EEG signals are decomposed into Intrinsic Mode Functions (IMFs) automatically. Multidimensional information from the IMFs is utilized as features: the first difference of the time series, the first difference of the phase, and the normalized energy. The performance of the proposed method is verified on a publicly available emotional database. The results show that the three features are effective for emotion recognition. The role of each IMF is examined, and we find that the high-frequency component IMF1 has a significant effect on detecting different emotional states. The informative electrodes based on the EMD strategy are analyzed. In addition, the classification accuracy of the proposed method is compared with several classical techniques, including fractal dimension (FD), sample entropy, differential entropy, and the discrete wavelet transform (DWT). Experimental results on the DEAP dataset demonstrate that our method can improve emotion recognition performance.
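The per-IMF feature set described above can be sketched with the PyEMD package: decompose the signal, then compute the mean first difference of each IMF, the mean first difference of its instantaneous (Hilbert) phase, and its normalized energy. The signal content and the cap on the number of IMFs are toy assumptions.

```python
# Minimal sketch: EMD decomposition and per-IMF feature extraction.
import numpy as np
from PyEMD import EMD
from scipy.signal import hilbert

def imf_features(x, max_imfs=4):
    imfs = EMD().emd(x)[:max_imfs]
    feats = []
    for imf in imfs:
        d1 = np.mean(np.abs(np.diff(imf)))                 # first difference of time series
        phase = np.unwrap(np.angle(hilbert(imf)))
        dphi = np.mean(np.abs(np.diff(phase)))             # first difference of phase
        energy = np.sum(imf ** 2) / np.sum(x ** 2)         # normalized energy
        feats.extend([d1, dphi, energy])
    return np.array(feats)

t = np.linspace(0, 2, 512)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(7).normal(size=t.size)
print(imf_features(x))
```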
Mirandola, Chiara; Toffalini, Enrico; Grassano, Massimo; Cornoldi, Cesare; Melinder, Annika
2014-01-01
The present experiment was conducted to investigate whether negative emotionally charged and arousing content of to-be-remembered scripted material would affect propensity towards memory distortions. We further investigated whether elaboration of the studied material through free recall would affect the magnitude of memory errors. In this study participants saw eight scripts. Each of the scripts included an effect of an action, the cause of which was not presented. Effects were either negatively emotional or neutral. Participants were assigned to either a yes/no recognition test group (recognition), or to a recall and yes/no recognition test group (elaboration + recognition). Results showed that participants in the recognition group produced fewer memory errors in the emotional condition. Conversely, elaboration + recognition participants had lower accuracy and produced more emotional memory errors than the other group, suggesting a mediating role of semantic elaboration on the generation of false memories. The role of emotions and semantic elaboration on the generation of false memories is discussed.
Sex differences in facial emotion recognition across varying expression intensity levels from videos
Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark
2018-01-01
There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or ‘extreme’ examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1sec) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations. PMID:29293674
Emotional Faces in Context: Age Differences in Recognition Accuracy and Scanning Patterns
Noh, Soo Rim; Isaacowitz, Derek M.
2014-01-01
While age-related declines in facial expression recognition are well documented, previous research relied mostly on isolated faces devoid of context. We investigated the effects of context on age differences in recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral to the facial expression to be identified. Both age groups had highest recognition rates of facial expressions in the congruent context, followed by the neutral context, and recognition rates in the incongruent context were worst. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning characteristics of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood. PMID:23163713
Emotion Recognition in Preschool Children: Associations with Maternal Depression and Early Parenting
Kujawa, Autumn; Dougherty, Lea; Durbin, C. Emily; Laptook, Rebecca; Torpey, Dana; Klein, Daniel N.
2013-01-01
Emotion knowledge in childhood has been shown to predict social functioning and psychological well-being, but relatively little is known about parental factors that influence its development in early childhood. There is some evidence that both parenting behavior and maternal depression are associated with emotion recognition, but previous research has only examined these factors independently. The current study assessed auditory and visual emotion recognition ability among a large sample of preschool children to examine typical emotion recognition skills in children of this age, as well as the independent and interactive effects of maternal and paternal depression and negative parenting (i.e., hostility and intrusiveness). Results indicated that children were most accurate at identifying happy emotional expressions, followed by other basic emotions. The lowest accuracy was observed for neutral expressions. A significant interaction was found between maternal depression and negative parenting behavior, such that children with a maternal history of depression were particularly sensitive to the negative effects of maladaptive parenting behavior on emotion recognition ability. No significant effects were found for paternal depression. These results highlight the importance of examining the effects of multiple interacting factors on children’s emotional development, and provide suggestions for identifying children for targeted preventive interventions. PMID:24444174
Morgenthaler, Jarste; Wiesner, Christian D; Hinze, Karoline; Abels, Lena C; Prehn-Kristensen, Alexander; Göder, Robert
2014-01-01
Sleep enhances memory consolidation and it has been hypothesized that rapid eye movement (REM) sleep in particular facilitates the consolidation of emotional memory. The aim of this study was to investigate this hypothesis using selective REM-sleep deprivation. We used a recognition memory task in which participants were shown negative and neutral pictures. Participants (N=29 healthy medical students) were separated into two groups (undisturbed sleep and selective REM-sleep deprived). Both groups also worked on the memory task in a wake condition. Recognition accuracy was significantly better for negative than for neutral stimuli and better after the sleep than the wake condition. There was, however, no difference in the recognition accuracy (neutral and emotional) between the groups. In summary, our data suggest that REM-sleep deprivation was successful and that the resulting reduction of REM-sleep had no influence on memory consolidation whatsoever.
Martin-Key, N; Brown, T; Fairchild, G
2017-10-01
Adolescents with disruptive behavior disorders are reported to show deficits in empathy and emotion recognition. However, prior studies have mainly used questionnaires to measure empathy or experimental paradigms that are lacking in ecological validity. We used an empathic accuracy (EA) task to study EA, emotion recognition, and affective empathy in 77 male adolescents aged 13-18 years: 37 with Conduct Disorder (CD) and 40 typically-developing controls. The CD sample was divided into higher callous-emotional traits (CD/CU+) and lower callous-unemotional traits (CD/CU-) subgroups using a median split. Participants watched films of actors recalling happy, sad, surprised, angry, disgusted or fearful autobiographical experiences and provided continuous ratings of emotional intensity (assessing EA), as well as naming the emotion (recognition) and reporting the emotion they experienced themselves (affective empathy). The CD and typically-developing groups did not significantly differ in EA and there were also no differences between the CD/CU+ and CD/CU- subgroups. Participants with CD were significantly less accurate than controls in recognizing sadness, fear, and disgust, all ps < 0.050, rs ≥ 0.30, whilst the CD/CU- and CD/CU+ subgroups did not differ in emotion recognition. Participants with CD also showed affective empathy deficits for sadness, fear, and disgust relative to controls, all ps < 0.010, rs ≥ 0.33, whereas the CD/CU+ and CD/CU- subgroups did not differ in affective empathy. These results extend prior research by demonstrating affective empathy and emotion recognition deficits in adolescents with CD using a more ecologically-valid task, and challenge the view that affective empathy deficits are specific to CD/CU+.
Altered brain mechanisms of emotion processing in pre-manifest Huntington's disease.
Novak, Marianne J U; Warren, Jason D; Henley, Susie M D; Draganski, Bogdan; Frackowiak, Richard S; Tabrizi, Sarah J
2012-04-01
Huntington's disease is an inherited neurodegenerative disease that causes motor, cognitive and psychiatric impairment, including an early decline in ability to recognize emotional states in others. The pathophysiology underlying the earliest manifestations of the disease is not fully understood; the objective of our study was to clarify this. We used functional magnetic resonance imaging to investigate changes in brain mechanisms of emotion recognition in pre-manifest carriers of the abnormal Huntington's disease gene (subjects with pre-manifest Huntington's disease): 16 subjects with pre-manifest Huntington's disease and 14 control subjects underwent 1.5 tesla magnetic resonance scanning while viewing pictures of facial expressions from the Ekman and Friesen series. Disgust, anger and happiness were chosen as emotions of interest. Disgust is the emotion in which recognition deficits have most commonly been detected in Huntington's disease; anger is the emotion in which impaired recognition was detected in the largest behavioural study of emotion recognition in pre-manifest Huntington's disease to date; and happiness is a positive emotion to contrast with disgust and anger. Ekman facial expressions were also used to quantify emotion recognition accuracy outside the scanner and structural magnetic resonance imaging with voxel-based morphometry was used to assess the relationship between emotion recognition accuracy and regional grey matter volume. Emotion processing in pre-manifest Huntington's disease was associated with reduced neural activity for all three emotions in partially separable functional networks. Furthermore, the Huntington's disease-associated modulation of disgust and happiness processing was negatively correlated with genetic markers of pre-manifest disease progression in distributed, largely extrastriatal networks. The modulated disgust network included insulae, cingulate cortices, pre- and postcentral gyri, precunei, cunei, bilateral putamena, right pallidum, right thalamus, cerebellum, middle frontal, middle occipital, right superior and left inferior temporal gyri, and left superior parietal lobule. The modulated happiness network included postcentral gyri, left caudate, right cingulate cortex, right superior and inferior parietal lobules, and right superior frontal, middle temporal, middle occipital and precentral gyri. These effects were not driven merely by striatal dysfunction. We did not find equivalent associations between brain structure and emotion recognition, and the pre-manifest Huntington's disease cohort did not have a behavioural deficit in out-of-scanner emotion recognition relative to controls. In addition, we found increased neural activity in the pre-manifest subjects in response to all three emotions in frontal regions, predominantly in the middle frontal gyri. Overall, these findings suggest that pathophysiological effects of Huntington's disease may precede the development of overt clinical symptoms and detectable cerebral atrophy.
Comparing Emotion Recognition Skills among Children with and without Jailed Parents.
Hindt, Lauren A; Davis, Laurel; Schubert, Erin C; Poehlmann-Tynan, Julie; Shlafer, Rebecca J
2016-01-01
Approximately five million children in the United States have experienced a co-resident parent's incarceration in jail or prison. Parental incarceration is associated with multiple risk factors for maladjustment, which may contribute to the increased likelihood of behavioral problems in this population. Few studies have examined early predictors of maladjustment among children with incarcerated parents, limiting scholars' understanding about potential points for prevention and intervention. Emotion recognition skills may play a role in the development of maladjustment and may be amenable to intervention. The current study examined whether emotion recognition skills differed between 3- to 8-year-old children with and without jailed parents. We hypothesized that children with jailed parents would have a negative bias in processing emotions and less accuracy compared to children without incarcerated parents. Data were drawn from 128 families, including 75 children (53.3% male, M = 5.37 years) with jailed parents and 53 children (39.6% male, M = 5.02 years) without jailed parents. Caregivers in both samples provided demographic information. Children performed an emotion recognition task in which they were asked to produce a label for photos expressing six different emotions (i.e., happy, surprised, neutral, sad, angry, and fearful). For scoring, the number of positive and negative labels were totaled; the number of negative labels provided for neutral and positive stimuli were totaled (measuring negative bias/overextension of negative labels); and valence accuracy (i.e., positive, negative, and neutral) and label accuracy were calculated. Results indicated a main effect of parental incarceration on the number of positive labels provided; children with jailed parents presented significantly fewer positive emotions than the comparison group. There was also a main effect of parental incarceration on negative bias (the overextension of negative labels); children with jailed parents had a negative bias compared to children without jailed parents. However, these findings did not hold when controlling for child age, race/ethnicity, receipt of special education services, and caregiver education. The results provide some evidence for the effect of the context of parental incarceration in the development of negative emotion recognition biases. Limitations and implications for future research and interventions are discussed.
Chen, Jing; Hu, Bin; Wang, Yue; Moore, Philip; Dai, Yongqiang; Feng, Lei; Ding, Zhijie
2017-12-20
Collaboration between humans and computers has become pervasive and ubiquitous; however, current computer systems are limited in that they fail to address the emotional component. An accurate understanding of human emotions is necessary for these computers to trigger proper feedback. Among multiple emotional channels, physiological signals are synchronous with emotional responses; therefore, analyzing physiological changes is a recognized way to estimate human emotions. In this paper, a three-stage decision method is proposed to recognize four emotions based on physiological signals in the multi-subject context. Emotion detection is achieved by using a stage-divided strategy in which each stage deals with a fine-grained goal. The decision method consists of three stages. During the training process, the initial stage transforms mixed training subjects into separate groups, thus eliminating the effect of individual differences. The second stage categorizes the four emotions into two emotion pools in order to reduce recognition complexity. The third stage trains a classifier based on the emotions in each emotion pool. During the testing process, a test trial is first classified into a group, then into an emotion pool in the second stage, and an emotion is assigned to it in the final stage. In this paper we consider two different ways of allocating the four emotions into two emotion pools. A comparative analysis is also carried out between the proposed method and other methods. An average recognition accuracy of 77.57% was achieved across the four emotions, with a best accuracy of 86.67% in recognizing the positive and excited emotion. Using differing ways of allocating the four emotions into two emotion pools, we found differences in how effectively a classifier learns each emotion. When compared to other methods, the proposed method demonstrates a significant improvement in recognizing four emotions in the multi-subject context. The proposed three-stage decision method addresses a crucial issue in multi-subject emotion recognition, namely individual differences, and overcomes the suboptimal performance associated with direct classification of multiple emotions. Our study supports the observation that the proposed method represents a promising methodology for recognizing multiple emotions in the multi-subject context.
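To make the stage-divided strategy concrete, here is a minimal Python sketch assuming synthetic physiological feature vectors, k-means for the subject-grouping stage, and support vector machines for the pool and within-pool stages; these specific choices are illustrative assumptions, not details confirmed by the abstract.

```python
# Minimal sketch of a three-stage, stage-divided classifier (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 16))            # trials x physiological features
emotion = rng.integers(0, 4, size=400)    # four emotion labels, 0..3
pool = (emotion >= 2).astype(int)         # stage 2: two emotion pools

# Stage 1: split mixed training subjects into groups to absorb
# individual differences (here approximated by k-means on the features).
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Stages 2 and 3: per group, train a pool classifier and one
# within-pool emotion classifier per pool.
models = {}
for g in range(2):
    idx = groups.labels_ == g
    pool_clf = SVC().fit(X[idx], pool[idx])
    within = {p: SVC().fit(X[idx & (pool == p)], emotion[idx & (pool == p)])
              for p in (0, 1)}
    models[g] = (pool_clf, within)

def predict(x):
    """Route a test trial through the three stages."""
    g = int(groups.predict(x.reshape(1, -1))[0])        # stage 1: group
    pool_clf, within = models[g]
    p = int(pool_clf.predict(x.reshape(1, -1))[0])      # stage 2: pool
    return int(within[p].predict(x.reshape(1, -1))[0])  # stage 3: emotion

print(predict(X[0]), "true:", emotion[0])
```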
Kujawa, Autumn; Dougherty, Lea; Durbin, C Emily; Laptook, Rebecca; Torpey, Dana; Klein, Daniel N
2014-02-01
Emotion knowledge in childhood has been shown to predict social functioning and psychological well-being, but relatively little is known about parental factors that influence its development in early childhood. There is some evidence that both parenting behavior and maternal depression are associated with emotion recognition, but previous research has only examined these factors independently. The current study assessed auditory and visual emotion recognition ability among a large sample of preschool children to examine typical emotion recognition skills in children of this age, as well as the independent and interactive effects of maternal and paternal depression and negative parenting (i.e., hostility and intrusiveness). Results indicated that children were most accurate at identifying happy emotional expressions. The lowest accuracy was observed for neutral expressions. A significant interaction was found between maternal depression and negative parenting behavior: children with a maternal history of depression were particularly sensitive to the negative effects of maladaptive parenting behavior on emotion recognition ability. No significant effects were found for paternal depression. These results highlight the importance of examining the effects of multiple interacting factors on children's emotional development and provide suggestions for identifying children for targeted preventive interventions.
On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information
NASA Astrophysics Data System (ADS)
Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.
Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy of affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotion-less state, which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and that the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
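As an illustration of how such complementary modalities might be combined, the sketch below fuses per-emotion posterior probabilities from the two subsystems by a weighted average; the weighting scheme and probability outputs are assumptions made here for illustration, not details given in the abstract.

```python
# Late-fusion sketch for two affect recognition subsystems (assumed scheme).
import numpy as np

EMOTIONS = ["happiness", "sadness", "surprise", "anger", "disgust", "neutral"]

def fuse(p_face, p_keys, w_face=0.6):
    """Combine visual-facial and keyboard-stroke posteriors by weighted average."""
    p_face, p_keys = np.asarray(p_face), np.asarray(p_keys)
    p = w_face * p_face + (1.0 - w_face) * p_keys
    return EMOTIONS[int(np.argmax(p))], p / p.sum()

# The face model is unsure between sadness and neutral; hypothetical keyboard
# evidence (e.g., slow, sparse typing) tips the decision toward sadness.
label, post = fuse([0.10, 0.38, 0.04, 0.05, 0.05, 0.38],
                   [0.05, 0.60, 0.05, 0.05, 0.05, 0.20])
print(label, post.round(2))
```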
McCubbin, James A; Loveless, James P; Graham, Jack G; Hall, Gabrielle A; Bart, Ryan M; Moore, DeWayne D; Merritt, Marcellus M; Lane, Richard D; Thayer, Julian F
2014-02-01
Persons with higher blood pressure (BP) show emotional dampening in some contexts. This may reflect interactive changes in central nervous system control of affect and autonomic function in the early stages of hypertension development. The purpose of this study is to determine the independence of cardiovascular emotional dampening from alexithymia, to better understand the role of affect dysregulation in blood pressure elevations. Ninety-six normotensive participants were assessed for resting systolic blood pressure and diastolic blood pressure (DBP), recognition of emotions in faces and sentences using the Perception of Affect Task (PAT), alexithymia, anxiety, and defensiveness. Resting DBP significantly predicted PAT emotion recognition accuracy in men after adjustment for age, self-reported affect, and alexithymia. Cardiovascular emotional dampening is independent of alexithymia and affect in men. Dampened emotion recognition could potentially influence interpersonal communication and psychosocial distress, thereby further contributing to BP dysregulation and increased cardiovascular risk.
Influence of emotional expression on memory recognition bias in schizophrenia as revealed by fMRI.
Sergerie, Karine; Armony, Jorge L; Menear, Matthew; Sutton, Hazel; Lepage, Martin
2010-07-01
We recently showed that, in healthy individuals, emotional expression influences memory for faces both in terms of accuracy and, critically, in memory response bias (the tendency to classify stimuli as previously seen or not, regardless of whether this was the case). Although schizophrenia has been shown to be associated with deficits in episodic memory and emotional processing, the relation between these processes in this population remains unclear. Here, we used our previously validated paradigm to directly investigate the modulation of emotion on memory recognition. Twenty patients with schizophrenia and matched healthy controls completed a functional magnetic resonance imaging (fMRI) study of recognition memory for happy, sad, and neutral faces. Brain activity associated with the response bias was obtained by correlating this measure with the contrast subjective old (i.e., hits and false alarms) minus subjective new (i.e., misses and correct rejections) for sad and happy expressions. Although patients exhibited an overall lower memory performance than controls, they showed the same effects of emotion on memory, both in terms of accuracy and bias. For sad faces, the similar behavioral pattern between groups was mirrored by a largely overlapping neural network, mostly involved in familiarity-based judgments (e.g., parahippocampal gyrus). In contrast, controls activated a much larger set of regions for happy faces, including areas thought to underlie recollection-based memory retrieval (e.g., superior frontal gyrus and hippocampus) and novelty detection (e.g., amygdala). This study demonstrates that, despite an overall lower memory accuracy, emotional memory is intact in schizophrenia, although emotion-specific differences in brain activation exist, possibly reflecting different strategies.
An approach to emotion recognition in single-channel EEG signals: a mother child interaction
NASA Astrophysics Data System (ADS)
Gómez, A.; Quintero, L.; López, N.; Castro, J.
2016-04-01
In this work, we perform a first approach to emotion recognition from single-channel EEG signals recorded in a developmental psychology experiment involving four (4) mother-child dyads. The single-channel EEG signals are analyzed and processed using several window sizes, performing a statistical analysis over features in the time and frequency domains. Finally, a neural network achieved an average classification accuracy of 99% for two emotional states, happiness and sadness.
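The windowed feature-extraction step lends itself to a short sketch. The following Python code uses placeholder signals, an assumed sampling rate, and an assumed feature set (per-window statistics plus alpha and beta band power) to show the general shape of such a pipeline; the study's actual window sizes, features, and network architecture are not specified in the abstract.

```python
# Sketch of a windowed single-channel EEG classification pipeline (assumed
# sampling rate, features, and network size; placeholder signals).
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(signal, fs=128, win_s=2.0):
    """Per-window mean, std, and alpha/beta band power from a 1-D EEG trace."""
    step = int(fs * win_s)
    feats = []
    for start in range(0, len(signal) - step + 1, step):
        w = signal[start:start + step]
        spec = np.abs(np.fft.rfft(w)) ** 2
        freqs = np.fft.rfftfreq(len(w), 1.0 / fs)
        alpha = spec[(freqs >= 8) & (freqs < 13)].sum()
        beta = spec[(freqs >= 13) & (freqs < 30)].sum()
        feats.append([w.mean(), w.std(), alpha, beta])
    return np.array(feats)

rng = np.random.default_rng(1)
happy = window_features(rng.normal(size=4096))        # placeholder traces
sad = window_features(rng.normal(size=4096) * 1.5)
X = np.vstack([happy, sad])
y = np.array([0] * len(happy) + [1] * len(sad))
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```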
Emotion categorization of body expressions in narrative scenarios
Volkova, Ekaterina P.; Mohler, Betty J.; Dodds, Trevor J.; Tesch, Joachim; Bülthoff, Heinrich H.
2014-01-01
Humans can recognize emotions expressed through body motion with high accuracy even when the stimuli are impoverished. However, most of the research on body motion has relied on exaggerated displays of emotions. In this paper we present two experiments where we investigated whether emotional body expressions could be recognized when they were recorded during natural narration. Our actors were free to use their entire body, face, and voice to express emotions, but our resulting visual stimuli used only the upper body motion trajectories in the form of animated stick figures. Observers were asked to perform an emotion recognition task on short motion sequences using a large and balanced set of emotions (amusement, joy, pride, relief, surprise, anger, disgust, fear, sadness, shame, and neutral). Even with only upper body motion available, our results show recognition accuracy significantly above chance level and high consistency rates among observers. In our first experiment, which used a more classic emotion induction setup, all emotions were well recognized. In the second study, which employed narrations, four basic emotion categories (joy, anger, fear, and sadness), three non-basic emotion categories (amusement, pride, and shame) and the “neutral” category were recognized above chance. Interestingly, especially in the second experiment, observers showed a bias toward anger when recognizing the motion sequences for emotions. We discovered that similarities between motion sequences across the emotions along such properties as mean motion speed, number of peaks in the motion trajectory, and mean motion span can explain a large percentage of the variation in observers' responses. Overall, our results show that upper body motion is informative for emotion recognition in narrative scenarios. PMID:25071623
Corcoran, C M; Keilp, J G; Kayser, J; Klim, C; Butler, P D; Bruder, G E; Gur, R C; Javitt, D C
2015-10-01
Schizophrenia is characterized by profound and disabling deficits in the ability to recognize emotion in facial expression and tone of voice. Although these deficits are well documented in established schizophrenia using recently validated tasks, their predictive utility in at-risk populations has not been formally evaluated. The Penn Emotion Recognition and Discrimination tasks, along with recently developed measures of auditory emotion recognition, were administered to 49 clinical high-risk subjects prospectively followed for 2 years for schizophrenia outcome, to 31 healthy controls, and to a developmental cohort of 43 individuals aged 7-26 years. Deficits in emotion recognition in at-risk subjects were compared with deficits in established schizophrenia and with normal neurocognitive growth curves from childhood to early adulthood. Deficits in emotion recognition significantly distinguished at-risk patients who transitioned to schizophrenia. By contrast, more general neurocognitive measures, such as attention vigilance or processing speed, were non-predictive. The best classification model for schizophrenia onset included both face emotion processing and negative symptoms, with an accuracy of 96% and an area under the receiver-operating characteristic curve of 0.99. In a parallel developmental study, emotion recognition abilities were found to reach maturity prior to the traditional age of risk for schizophrenia, suggesting they may serve as objective markers of early developmental insult. Profound deficits in emotion recognition exist in at-risk patients prior to schizophrenia onset. They may serve as an index of early developmental insult and represent an effective target for early identification and remediation. Future studies investigating emotion recognition deficits at both mechanistic and predictive levels are strongly encouraged.
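The reported classifier combining face emotion processing and negative symptoms can be pictured with a small sketch. Below is an illustrative logistic-regression version on synthetic data; the authors' actual model, predictors, and validation procedure are not described in enough detail in the abstract to reproduce, so everything here is an assumption.

```python
# Illustrative two-predictor classification of transition to schizophrenia
# (synthetic data; in-sample AUC shown only to mirror the reported metric,
# and it is optimistically biased without cross-validation).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 49
emotion = rng.normal(size=n)        # face emotion recognition score (z)
negsym = rng.normal(size=n)         # negative symptom severity (z)
risk = -2.0 * emotion + 1.5 * negsym + rng.normal(size=n)
transition = (risk > np.median(risk)).astype(int)   # guarantees both classes

X = np.column_stack([emotion, negsym])
model = LogisticRegression().fit(X, transition)
auc = roc_auc_score(transition, model.predict_proba(X)[:, 1])
print(f"in-sample AUC: {auc:.2f}")
```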
Empathy costs: Negative emotional bias in high empathisers.
Chikovani, George; Babuadze, Lasha; Iashvili, Nino; Gvalia, Tamar; Surguladze, Simon
2015-09-30
Excessive empathy has been associated with compassion fatigue in health professionals and caregivers. We investigated the effect of empathy on emotion processing in 137 healthy individuals of both sexes. We tested the hypothesis that high empathy may underlie increased sensitivity to negative emotion recognition, which may interact with gender. Facial emotion stimuli comprised happy, angry, fearful, and sad faces presented at different intensities (mild and prototypical) and different durations (500 ms and 2000 ms). The parameters of emotion processing were represented by discrimination accuracy, response bias, and reaction time. We found that higher empathy was associated with better recognition of all emotions. We also demonstrated that higher empathy was associated with a response bias towards sad and fearful faces. The reaction time analysis revealed that higher empathy in females was associated with faster (compared with males) recognition of mildly sad faces of brief duration. We conclude that although empathic abilities confer an advantage in the recognition of all facial emotional expressions, the bias towards emotional negativity may potentially carry a risk for empathic distress. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?
Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James
2017-01-01
Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-alternative forced-choice response format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial, participants rated their confidence in the perceived accuracy of their response. For emotion recognition, PD patients scored lower than HC on the Ekman total score (p<0.001) and on the sub-scores for happiness, fear, anger, sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness, and anger (all p<0.001). RT and the level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between facial emotion recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). PD patients showed difficulties both in recognizing emotional facial expressions produced by others and in posing facial emotional expressions compared to healthy subjects. The linear correlation between recognition and expression in both experimental groups suggests that the two mechanisms share a common system, which could be deteriorated in patients with PD. These results open new clinical and rehabilitation perspectives.
Sasson, Noah J; Pinkham, Amy E; Weittenhiller, Lauren P; Faso, Daniel J; Simpson, Claire
2016-05-01
Although Schizophrenia (SCZ) and Autism Spectrum Disorder (ASD) share impairments in emotion recognition, the mechanisms underlying these impairments may differ. The current study used the novel "Emotions in Context" task to examine how the interpretation and visual inspection of facial affect is modulated by congruent and incongruent emotional contexts in SCZ and ASD. Both adults with SCZ (n = 44) and those with ASD (n = 21) exhibited reduced affect recognition relative to typically developing (TD) controls (n = 39) when faces were integrated within broader emotional scenes but not when they were presented in isolation, underscoring the importance of using stimuli that better approximate real-world contexts. Additionally, viewing faces within congruent emotional scenes improved accuracy and visual attention to the face for controls more so than for the clinical groups, suggesting that individuals with SCZ and ASD may not benefit from the presence of complementary emotional information as readily as controls. Despite these similarities, important distinctions between SCZ and ASD were found. In every condition, IQ was related to emotion-recognition accuracy for the SCZ group but not for the ASD or TD groups. Further, only the ASD group failed to increase their visual attention to faces in incongruent emotional scenes, suggesting a lower reliance on facial information within ambiguous emotional contexts relative to congruent ones. Collectively, these findings highlight both shared and distinct social cognitive processes in SCZ and ASD that may contribute to their characteristic social disabilities. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Williams, Beth T; Gray, Kylie M
2013-11-01
This study assessed the relationship between emotion recognition ability and social skills in 42 young children with autistic disorder aged 4-7 years. The analyses revealed that accuracy in recognition of sadness, but not happiness, anger or fear, was associated with higher ratings on the Vineland-II Socialization domain, above and beyond the influence of chronological age, cognitive ability and autism symptom severity. These findings extend previous research with adolescents and adults with autism spectrum disorders, suggesting that sadness recognition is also associated with social skills in children with autism.
Attenuated sensitivity to the emotions of others by insular lesion
Terasawa, Yuri; Kurosaki, Yoshiko; Ibata, Yukio; Moriguchi, Yoshiya; Umeda, Satoshi
2015-01-01
The insular cortex has been considered to be the neural base of visceral sensation for many years. Previous studies in psychology and cognitive neuroscience have accumulated evidence indicating that interoception is an essential factor in the subjective feeling of emotion. Recent neuroimaging studies have demonstrated that anterior insular cortex activation is associated with accessing interoceptive information and underpins the subjective experience of emotional state. Only a small number of studies have focused on the influence of insular damage on emotion processing and interoceptive awareness. Moreover, disparate hypotheses have been proposed for the alteration of emotion processing by insular lesions. Some studies show that insular lesions yield an inability to understand and represent disgust exclusively, but other studies suggest that such lesions modulate arousal and valence judgments for both positive and negative emotions. In this study, we examined the alteration of emotion recognition in three cases with damage to the right insula and adjacent areas and well-preserved higher cognitive function. Participants performed an experimental task using morphed photos that ranged between neutral and emotional facial expressions (i.e., anger, sadness, disgust, and happiness). Recognition rates for particular emotions were calculated to measure emotional sensitivity. In addition, participants performed a heartbeat perception task measuring interoceptive accuracy. The cases identified emotions with a high arousal level (e.g., anger) as less aroused emotions (e.g., sadness), and one case showed remarkably low interoceptive accuracy. The current results show that insular lesions lead to attenuated emotional sensitivity across emotions, rather than category-specific impairments such as to disgust. Despite the small number of cases, our findings suggest that the insular cortex modulates recognition of emotional saliency and mediates interoceptive and emotional awareness. PMID:26388817
Children's understanding of facial expression of emotion: II. Drawing of emotion-faces.
Missaghi-Lakshman, M; Whissell, C
1991-06-01
67 children from Grades 2, 4, and 7 drew faces representing the emotional expressions of fear, anger, surprise, disgust, happiness, and sadness. The children themselves and 29 adults later decoded the drawings in an emotion-recognition task. Children were the more accurate decoders, and their accuracy and the accuracy of adults increased significantly for judgments of 7th-grade drawings. The emotions happy and sad were most accurately decoded. There were no significant differences associated with sex. In their drawings, children utilized a symbol system that seems to be based on a highlighting or exaggeration of features of the innately governed facial expression of emotion.
Kempnich, Clare L; Wong, Dana; Georgiou-Karistianis, Nellie; Stout, Julie C
2017-04-01
Deficits in the recognition of negative emotions emerge before clinical diagnosis in Huntington's disease (HD). To address emotion recognition deficits, which have been shown in schizophrenia to be improved by computerized training, we conducted a study of the feasibility and efficacy of computerized training of emotion recognition in HD. We randomly assigned 22 individuals with premanifest or early symptomatic HD to the training or control group. The training group used a self-guided online training program, MicroExpression Training Tool (METT), twice weekly for 4 weeks. All participants completed measures of emotion recognition at baseline and post-training time-points. Participants in the training group also completed training adherence measures. Participants in the training group completed seven of the eight sessions on average. Results showed a significant group by time interaction, indicating that METT training was associated with improved accuracy in emotion recognition. Although sample size was small, our study demonstrates that emotion recognition remediation using the METT is feasible in terms of training adherence. The evidence also suggests METT may be effective in premanifest or early-symptomatic HD, opening up a potential new avenue for intervention. Further study with a larger sample size is needed to replicate these findings, and to characterize the durability and generalizability of these improvements, and their impact on functional outcomes in HD. (JINS, 2017, 23, 314-321).
Candra, Henry; Yuwono, Mitchell; Rifai Chai; Nguyen, Hung T; Su, Steven
2016-08-01
Psychotherapy requires appropriate recognition of a patient's facial emotion expressions to provide proper treatment during a psychotherapy session. To address this need, this paper proposes a facial emotion recognition system using a combination of the Viola-Jones detector together with a feature descriptor we term Edge-Histogram of Oriented Gradients (E-HOG). The performance of the proposed method is compared across various feature sources, including the face, the eyes, the mouth, and both the eyes and the mouth. Seven classes of basic emotions have been successfully identified with 96.4% accuracy using a multi-class Support Vector Machine (SVM). The proposed descriptor E-HOG is much leaner to compute than traditional HOG, as shown by a significant improvement in processing time as high as 1833.33% (p-value = 2.43E-17), with only a slight reduction in accuracy of 1.17% (p-value = 0.0016).
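A speculative Python sketch of this kind of pipeline appears below, using OpenCV's stock Viola-Jones cascade and approximating E-HOG, by our own assumption, as HOG computed over a Canny edge map; the paper's actual descriptor is defined in the full text, not the abstract, and may differ.

```python
# Speculative face-detection + edge-HOG descriptor sketch (assumed E-HOG form).
import cv2
import numpy as np
from sklearn.svm import SVC

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def e_hog(gray_img):
    """Detect a face, then describe its edge map with HOG (uint8 input)."""
    faces = detector.detectMultiScale(gray_img, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray_img[y:y + h, x:x + w], (64, 64))
    edges = cv2.Canny(face, 100, 200)      # keep only edge structure
    return hog.compute(edges).ravel()

# With descriptors and labels collected, a multi-class SVM does the rest:
# clf = SVC(kernel="linear").fit(np.vstack(descriptors), labels)
```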
Age- and gender-related variations of emotion recognition in pseudowords and faces.
Demenescu, Liliana R; Mathiak, Krystyna A; Mathiak, Klaus
2014-01-01
BACKGROUND/STUDY CONTEXT: The ability to interpret emotionally salient stimuli is an important skill for successful social functioning at any age. The objective of the present study was to disentangle age and gender effects on emotion recognition ability in voices and faces. Three age groups of participants (young, age range: 18-35 years; middle-aged, age range: 36-55 years; and older, age range: 56-75 years) identified basic emotions presented in voices and faces in a forced-choice paradigm. Five emotions (angry, fearful, sad, disgusted, and happy) and a nonemotional category (neutral) were shown as encoded in color photographs of facial expressions and pseudowords spoken in affective prosody. Overall, older participants had a lower accuracy rate in categorizing emotions than young and middle-aged participants. Females performed better than males in recognizing emotions from voices, and this gender difference emerged in middle-aged and older participants. The performance of emotion recognition in faces was significantly correlated with the performance in voices. The current study provides further evidence for a general age and gender effect on emotion recognition; the advantage of females seems to be age- and stimulus modality-dependent.
Investigating Patterns for Self-Induced Emotion Recognition from EEG Signals.
Zhuang, Ning; Zeng, Ying; Yang, Kai; Zhang, Chi; Tong, Li; Yan, Bin
2018-03-12
Most current approaches to emotion recognition are based on neural signals elicited by affective materials such as images, sounds and videos. However, the application of neural patterns in the recognition of self-induced emotions remains uninvestigated. In this study we inferred the patterns and neural signatures of self-induced emotions from electroencephalogram (EEG) signals. The EEG signals of 30 participants were recorded while they watched 18 Chinese movie clips which were intended to elicit six discrete emotions, including joy, neutrality, sadness, disgust, anger and fear. After watching each movie clip the participants were asked to self-induce emotions by recalling a specific scene from each movie. We analyzed the important features, electrode distribution and average neural patterns of different self-induced emotions. Results demonstrated that features related to high-frequency rhythm of EEG signals from electrodes distributed in the bilateral temporal, prefrontal and occipital lobes have outstanding performance in the discrimination of emotions. Moreover, the six discrete categories of self-induced emotion exhibit specific neural patterns and brain topography distributions. We achieved an average accuracy of 87.36% in the discrimination of positive from negative self-induced emotions and 54.52% in the classification of emotions into six discrete categories. Our research will help promote the development of comprehensive endogenous emotion recognition methods.
Cultural differences in gaze and emotion recognition: Americans contrast more than Chinese.
Stanley, Jennifer Tehan; Zhang, Xin; Fung, Helene H; Isaacowitz, Derek M
2013-02-01
We investigated the influence of contextual expressions on emotion recognition accuracy and gaze patterns among American and Chinese participants. We expected Chinese participants would be more influenced by, and attend more to, contextual information than Americans. Consistent with our hypothesis, Americans were more accurate than Chinese participants at recognizing emotions embedded in the context of other emotional expressions. Eye-tracking data suggest that, for some emotions, Americans attended more to the target faces, and they made more gaze transitions to the target face than Chinese. For all emotions except anger and disgust, Americans appeared to use more of a contrasting strategy where each face was individually contrasted with the target face, compared with Chinese who used less of a contrasting strategy. Both cultures were influenced by contextual information, although the benefit of contextual information depended upon the perceptual dissimilarity of the contextual emotions to the target emotion and the gaze pattern employed during the recognition task. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Stewart, Suzanne L K; Schepman, Astrid; Haigh, Matthew; McHugh, Rhian; Stewart, Andrew J
2018-03-14
The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently-valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically, contextually influence the perceptual processing of emotional facial expressions in a separate task even when those emotions are of a distinctive valence. Thus, our novel findings suggest that language acts as a contextual influence to the recognition of emotional facial expressions for both same and different valences.
Seymour, Karen E; Jones, Richard N; Cushman, Grace K; Galvan, Thania; Puzia, Megan E; Kim, Kerri L; Spirito, Anthony; Dickstein, Daniel P
2016-03-01
Little is known about the bio-behavioral mechanisms underlying and differentiating suicide attempts from non-suicidal self-injury (NSSI) in adolescents. Adolescents who attempt suicide or engage in NSSI often report significant interpersonal and social difficulties. Emotional face recognition ability is a fundamental skill required for successful social interactions, and deficits in this ability may provide insight into the unique brain-behavior interactions underlying suicide attempts versus NSSI in adolescents. Therefore, we examined emotional face recognition ability among three mutually exclusive groups: (1) inpatient adolescents who attempted suicide (SA, n = 30); (2) inpatient adolescents engaged in NSSI (NSSI, n = 30); and (3) typically developing controls (TDC, n = 30) without psychiatric illness. Participants included adolescents aged 13-17 years, matched on age, gender and full-scale IQ. Emotional face recognition was evaluated using the Diagnostic Analysis of Nonverbal Accuracy (DANVA-2). Compared to TDC youth, adolescents with NSSI made more errors on child fearful and adult sad face recognition while controlling for psychopathology and medication status (ps < 0.05). No differences were found in emotional face recognition between the NSSI and SA groups. Secondary analyses showed that compared to inpatients without major depression, those with major depression made fewer errors on adult sad face recognition even when controlling for group status (p < 0.05). Further, compared to inpatients without generalized anxiety, those with generalized anxiety made fewer recognition errors on adult happy faces even when controlling for group status (p < 0.05). Adolescent inpatients engaged in NSSI showed greater deficits in emotional face recognition than TDC, but not than inpatient adolescents who attempted suicide. Further, the results suggest the importance of psychopathology in emotional face recognition. Replication of these preliminary results and examination of the role of context-dependent emotional processing are needed moving forward.
Neuroticism and facial emotion recognition in healthy adults.
Andric, Sanja; Maric, Nadja P; Knezevic, Goran; Mihaljevic, Marina; Mirjanic, Tijana; Velthorst, Eva; van Os, Jim
2016-04-01
The aim of the present study was to examine whether healthy individuals with higher levels of neuroticism, a robust independent predictor of psychopathology, exhibit altered facial emotion recognition performance. Facial emotion recognition accuracy was investigated in 104 healthy adults using the Degraded Facial Affect Recognition Task (DFAR). Participants' degree of neuroticism was estimated using neuroticism scales extracted from the Eysenck Personality Questionnaire and the Revised NEO Personality Inventory. A significant negative correlation between the degree of neuroticism and the percentage of correct answers on the DFAR was found only for the happy facial expression (significant after applying Bonferroni correction). Altered sensitivity to emotional context represents a useful and easily obtained cognitive phenotype that correlates strongly with inter-individual variations in neuroticism linked to stress vulnerability and subsequent psychopathology. The present findings could have implications for early intervention strategies and staging models in psychiatry. © 2015 Wiley Publishing Asia Pty Ltd.
The voice conveys emotion in ten globalized cultures and one remote village in Bhutan.
Cordaro, Daniel T; Keltner, Dacher; Tshering, Sumjay; Wangchuk, Dorji; Flynn, Lisa M
2016-02-01
With data from 10 different globalized cultures and 1 remote, isolated village in Bhutan, we examined universals and cultural variations in the recognition of 16 nonverbal emotional vocalizations. College students in 10 nations (Study 1) and villagers in remote Bhutan (Study 2) were asked to match emotional vocalizations to 1-sentence stories of the same valence. Guided by previous conceptualizations of recognition accuracy, across both studies, 7 of the 16 vocal burst stimuli were found to have strong or very strong recognition in all 11 cultures, 6 vocal bursts were found to have moderate recognition, and 4 were not universally recognized. All vocal burst stimuli varied significantly in terms of the degree to which they were recognized across the 11 cultures. Our discussion focuses on the implications of these results for current debates concerning the emotion conveyed in the voice. (c) 2016 APA, all rights reserved.
Cross-cultural decoding of positive and negative non-linguistic emotion vocalizations.
Laukka, Petri; Elfenbein, Hillary Anger; Söder, Nela; Nordström, Henrik; Althoff, Jean; Chui, Wanda; Iraki, Frederick K; Rockstuhl, Thomas; Thingujam, Nutankumar S
2013-01-01
Which emotions are associated with universally recognized non-verbal signals? We address this issue by examining how reliably non-linguistic vocalizations (affect bursts) can convey emotions across cultures. Actors from India, Kenya, Singapore, and USA were instructed to produce vocalizations that would convey nine positive and nine negative emotions to listeners. The vocalizations were judged by Swedish listeners using a within-valence forced-choice procedure, where positive and negative emotions were judged in separate experiments. Results showed that listeners could recognize a wide range of positive and negative emotions with accuracy above chance. For positive emotions, we observed the highest recognition rates for relief, followed by lust, interest, serenity and positive surprise, with affection and pride receiving the lowest recognition rates. Anger, disgust, fear, sadness, and negative surprise received the highest recognition rates for negative emotions, with the lowest rates observed for guilt and shame. By way of summary, results showed that the voice can reveal both basic emotions and several positive emotions other than happiness across cultures, but self-conscious emotions such as guilt, pride, and shame seem not to be well recognized from non-linguistic vocalizations.
Robust representation and recognition of facial emotions using extreme sparse learning.
Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang
2015-07-01
Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
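A rough sketch of the two named ingredients follows: sparse codes over a learned dictionary feeding an extreme learning machine (a random hidden layer with a least-squares readout). The paper learns the dictionary and classifier jointly; for brevity this sketch trains them sequentially on synthetic data, which is our simplification, not the authors' method.

```python
# Sequential stand-in for extreme sparse learning: dictionary + ELM readout.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 40))                 # spatio-temporal descriptors
y = rng.integers(0, 6, size=300)               # six emotion labels

# Sparse representation: learn a dictionary and encode each sample.
dico = MiniBatchDictionaryLearning(n_components=25, alpha=1.0,
                                   random_state=0).fit(X)
codes = dico.transform(X)

# Extreme learning machine: fixed random hidden layer, solved readout.
W_in = rng.normal(size=(codes.shape[1], 50))
H = np.tanh(codes @ W_in)
T = np.eye(6)[y]                               # one-hot targets
W_out = np.linalg.pinv(H) @ T                  # least-squares solution
pred = np.argmax(np.tanh(codes @ W_in) @ W_out, axis=1)
print("training accuracy:", (pred == y).mean())
```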
Rigon, Arianna; Turkstra, Lyn; Mutlu, Bilge; Duff, Melissa
2016-10-01
Although moderate to severe traumatic brain injury (TBI) leads to facial affect recognition impairments in up to 39% of individuals, protective and risk factors for these deficits are unknown. The aim of the current study was to examine the effect of sex on emotion recognition abilities following TBI. We administered two separate emotion recognition tests (one static and one dynamic) to 53 individuals with moderate to severe TBI (females = 28) and 49 demographically matched comparisons (females = 22). We then investigated the presence of a sex-by-group interaction in emotion recognition accuracy. In the comparison group, there were no sex differences. In the TBI group, however, females significantly outperformed males in the dynamic (but not the static) task. Moreover, males (but not females) with TBI performed significantly worse than comparison participants in the dynamic task. Further analysis revealed that sex differences in emotion recognition abilities within the TBI group could not be explained by lesion location, TBI severity, or other neuropsychological variables. These findings suggest that sex may serve as a protective factor for social impairment following TBI and inform clinicians working with TBI as well as research on the neurophysiological correlates of sex differences in social functioning.
Actis-Grosso, Rossana; Bossi, Francesco; Ricciardelli, Paola
2015-01-01
We investigated whether the type of stimulus (pictures of static faces vs. body motion) contributes differently to the recognition of emotions. The performance (accuracy and response times) of 25 Low Autistic Traits (LAT group) young adults (21 males) and 20 young adults (16 males) with either High Autistic Traits or with High Functioning Autism Spectrum Disorder (HAT group) was compared in the recognition of four emotions (Happiness, Anger, Fear, and Sadness) either shown in static faces or conveyed by moving body patch-light displays (PLDs). Overall, HAT individuals were as accurate as LAT ones in perceiving emotions both with faces and with PLDs. Moreover, they correctly described non-emotional actions depicted by PLDs, indicating that they perceived the motion conveyed by the PLDs per se. For LAT participants, happiness proved to be the easiest emotion to be recognized: in line with previous studies we found a happy face advantage for faces, which for the first time was also found for bodies (happy body advantage). Furthermore, LAT participants recognized sadness better by static faces and fear by PLDs. This advantage for motion kinematics in the recognition of fear was not present in HAT participants, suggesting that (i) emotion recognition is not generally impaired in HAT individuals, (ii) the cues exploited for emotion recognition by LAT and HAT groups are not always the same. These findings are discussed against the background of emotional processing in typically and atypically developed individuals. PMID:26557101
Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition
Rigoulot, Simon; Wassiliwizky, Eugen; Pell, Marc D.
2013-01-01
Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and Kotz, 2011). To investigate whether vocal emotion recognition is largely dictated by the amount of time listeners are exposed to speech or the position of critical emotional cues in the utterance, 40 English participants judged the meaning of emotionally-inflected pseudo-utterances presented in a gating paradigm, where utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated syllable-by-syllable from the offset rather than the onset of the stimulus). Accuracy for detecting six target emotions in each gate condition and the mean identification point for each emotion in milliseconds were analyzed and compared to results from Pell and Kotz (2011). We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, and new evidence that utterance-final syllables tended to facilitate listeners' accuracy in many conditions when compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral from speech cues was not influenced by how utterances were gated, although happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally-expressed emotions within the 400–1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech. PMID:23805115
Emotional prosody processing in autism spectrum disorder
Kliemann, Dorit; Dziobek, Isabel; Heekeren, Hauke R.
2017-01-01
Individuals with Autism Spectrum Disorder (ASD) are characterized by severe deficits in social communication, but the nature of their impairments in emotional prosody processing has yet to be specified. Here, we investigated emotional prosody processing in individuals with ASD and controls with novel, lifelike behavioral and neuroimaging paradigms. Compared to controls, individuals with ASD showed reduced emotional prosody recognition accuracy on a behavioral task. On the neural level, individuals with ASD displayed reduced activity of the STS, insula and amygdala for complex vs basic emotions compared to controls. Moreover, the coupling between the STS and amygdala for complex vs basic emotions was reduced in the ASD group. Finally, the groups differed with respect to the relationship between brain activity and behavioral performance. Brain activity during emotional prosody processing was more strongly related to prosody recognition accuracy in ASD participants. In contrast, the coupling between STS and anterior cingulate cortex (ACC) activity predicted behavioral task performance more strongly in the control group. These results provide evidence for aberrant emotional prosody processing in individuals with ASD. They suggest that differences in the relationship between the neural and behavioral levels in individuals with ASD may account for their observed deficits in social communication. PMID:27531389
Age, gender, and puberty influence the development of facial emotion recognition.
Lawrence, Kate; Campbell, Ruth; Skuse, David
2015-01-01
Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children's ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6-16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modeled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children's ability to recognize facial expressions of happiness, surprise, fear, and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6-16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers.
A Diffusion Model Analysis of Decision Biases Affecting Delayed Recognition of Emotional Stimuli.
Bowen, Holly J; Spaniol, Julia; Patel, Ronak; Voss, Andreas
2016-01-01
Previous empirical work suggests that emotion can influence accuracy and the cognitive biases underlying recognition memory, depending on the experimental conditions. The current study examines the effects of arousal and valence on delayed recognition memory using the diffusion model, which allows the separation of two decision biases thought to underlie memory: response bias and memory bias. Memory bias has not been given much attention in the literature but can provide insight into the retrieval dynamics of emotion-modulated memory. Participants viewed emotional pictorial stimuli; half were given a recognition test 1 day later and the other half 7 days later. Analyses revealed that emotional valence generally evokes liberal responding, whereas high arousal evokes liberal responding only at a short retention interval. The memory bias analyses indicated that participants experienced greater familiarity with high-arousal compared to low-arousal items, and this pattern became more pronounced as the study-test lag increased; positive items evoke greater familiarity compared to negative ones, and this pattern remained stable across retention intervals. The findings provide insight into the separate contributions of valence and arousal to the cognitive mechanisms underlying delayed emotion-modulated memory.
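To make the distinction between the two biases concrete, here is a small simulation of our own devising (not the authors' fitting code): response bias appears as a shifted diffusion starting point, while memory (familiarity) bias appears as a drift toward the "old" boundary.

```python
# Simulating the two diffusion-model biases: starting point vs drift.
import numpy as np

def ddm_trial(v, z, a=1.0, dt=0.001, s=1.0, rng=None):
    """Accumulate noisy evidence until hitting 0 ('new') or a ('old')."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = z, 0.0
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.normal()
        t += dt
    return ("old" if x >= a else "new"), t

rng = np.random.default_rng(4)
# Response bias: start nearer the "old" boundary, no drift.
start_bias = [ddm_trial(v=0.0, z=0.65, rng=rng)[0] for _ in range(1000)]
# Memory (familiarity) bias: neutral start, drift toward "old".
drift_bias = [ddm_trial(v=0.5, z=0.50, rng=rng)[0] for _ in range(1000)]
print("P('old') under response bias:", start_bias.count("old") / 1000)
print("P('old') under memory bias:  ", drift_bias.count("old") / 1000)
```

Both manipulations raise the probability of an "old" response, but they leave different fingerprints in the reaction-time distributions, which is what lets the diffusion model separate them.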
Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J
2013-12-01
Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n = 8) or schizoaffective disorder (n = 23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio only and visual only conditions but did not differ from controls in the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.
Coleman, Jonathan R.I.; Lester, Kathryn J.; Keers, Robert; Munafò, Marcus R.; Breen, Gerome
2017-01-01
Emotion recognition is disrupted in many mental health disorders, which may reflect shared genetic aetiology between this trait and these disorders. We explored genetic influences on emotion recognition and the relationship between these influences and mental health phenotypes. Eight-year-old participants (n = 4,097) from the Avon Longitudinal Study of Parents and Children (ALSPAC) completed the Diagnostic Analysis of Non-Verbal Accuracy (DANVA) faces test. Genome-wide genotype data were available from the Illumina HumanHap550 Quad microarray. Genome-wide association studies were performed to assess associations with recognition of individual emotions and emotion in general. Exploratory polygenic risk scoring was performed using published genomic data for schizophrenia, bipolar disorder, depression, autism spectrum disorder, anorexia, and anxiety disorders. No individual genetic variants were identified at conventional levels of significance in any analysis, although several loci were associated at a level suggestive of significance. SNP-chip heritability analyses did not identify a heritable component of variance for any phenotype. Polygenic scores were not associated with any phenotype. The effect sizes of variants influencing emotion recognition are likely to be small. Previous studies of emotion identification have yielded non-zero estimates of SNP-heritability. This discrepancy is likely due to differences in the measurement and analysis of the phenotype. PMID:28608620
A nonlinear heartbeat dynamics model approach for personalized emotion recognition.
Valenza, Gaetano; Citi, Luca; Lanatà, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2013-01-01
Emotion recognition based on autonomic nervous system signs is one of the ambitious goals of affective computing. It is well accepted that standard signal processing techniques require relatively long time series of multivariate records to ensure reliability and robustness of recognition and classification algorithms. In this work, we present a novel methodology able to assess cardiovascular dynamics during short-time (i.e., < 10 seconds) affective stimuli, thus overcoming some of the limitations of current emotion recognition approaches. We developed a personalized, fully parametric probabilistic framework based on point-process theory where heartbeat events are modelled using a second-order nonlinear autoregressive integrative structure in order to achieve effective performance in short-time affective assessment. Experimental results show a comprehensive emotional characterization of 4 subjects undergoing passive affective elicitation using a sequence of standardized images gathered from the International Affective Picture System (IAPS). Each picture was identified by its IAPS arousal and valence scores as well as by a self-reported emotional label associating a subjective positive or negative emotion. Results show a clear classification of two defined levels of arousal, valence and self-reported emotional state using features from the instantaneous spectrum and bispectrum of the considered RR intervals, reaching up to 90% recognition accuracy.
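As a rough illustration of the modelling idea, the fragment below fits a quadratic (second-order nonlinear) autoregression to a synthetic RR series by least squares. It is a drastic simplification of the paper's point-process framework (the probabilistic heartbeat model and the instantaneous spectra and bispectra are omitted), and every name and number in it is assumed for illustration.

```python
# Drastically simplified stand-in for the paper's model: a quadratic
# (second-order nonlinear) autoregression of RR intervals, fit by ordinary
# least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic RR series (seconds) with slow modulation plus noise.
rr = 0.8 + 0.05 * np.sin(np.linspace(0, 20, 300)) + 0.01 * rng.standard_normal(300)

p = 2  # autoregressive order
rows = []
for t in range(p, len(rr)):
    lags = rr[t - p:t][::-1]                       # rr[t-1], rr[t-2]
    quad = [lags[i] * lags[j] for i in range(p) for j in range(i, p)]
    rows.append(np.concatenate(([1.0], lags, quad)))
X, y = np.array(rows), rr[p:]

theta, *_ = np.linalg.lstsq(X, y, rcond=None)      # linear + quadratic terms
pred = X @ theta
print(f"one-step RMSE: {np.sqrt(np.mean((pred - y) ** 2)) * 1000:.2f} ms")
```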
Sleep facilitates consolidation of emotional declarative memory.
Hu, Peter; Stylos-Allan, Melinda; Walker, Matthew P
2006-10-01
Both sleep and emotion are known to modulate processes of memory consolidation, yet their interaction is poorly understood. We examined the influence of sleep on consolidation of emotionally arousing and neutral declarative memory. Subjects completed an initial study session involving arousing and neutral pictures, either in the evening or in the morning. Twelve hours later, after sleeping or staying awake, subjects performed a recognition test requiring them to discriminate between these original pictures and novel pictures by responding "remember," "know" (familiar), or "new." Selective sleep effects were observed for consolidation of emotional memory: Recognition accuracy for know judgments of arousing stimuli improved by 42% after sleep relative to wake, and recognition bias for remember judgments of these stimuli increased by 58% after sleep relative to wake (resulting in more conservative responding). These findings hold important implications for our understanding of human memory processing, suggesting that the facilitation of memory for emotionally salient information may preferentially develop during sleep.
Culture but not gender modulates amygdala activation during explicit emotion recognition.
Derntl, Birgit; Habel, Ute; Robinson, Simon; Windischberger, Christian; Kryspin-Exner, Ilse; Gur, Ruben C; Moser, Ewald
2012-05-29
Mounting evidence indicates that humans have significant difficulties in understanding emotional expressions from individuals of different ethnic backgrounds, leading to reduced recognition accuracy and stronger amygdala activation. However, the impact of gender on the behavioral and neural reactions during the initial phase of cultural assimilation has not been addressed. Therefore, we investigated 24 Asian students (12 females) and 24 age-matched European students (12 females) during an explicit emotion recognition task, using Caucasian facial expressions only, on a high-field MRI scanner. Analysis of functional data revealed bilateral amygdala activation to emotional expressions in Asian and European subjects. However, in the Asian sample, a stronger response of the amygdala emerged and was paralleled by reduced recognition accuracy, particularly for angry male faces. Moreover, no significant gender difference emerged. We also observed a significant inverse correlation between duration of stay and amygdala activation. In this study we investigated the "alien-effect" as an initial problem during cultural assimilation and examined this effect on a behavioral and neural level. This study revealed bilateral amygdala activation to emotional expressions in Asian and European females and males. In the Asian sample, a stronger bilateral amygdala response was observed, paralleled by reduced performance, especially for anger and disgust depicted by male expressions. However, no gender difference occurred. Taken together, while gender exerts only a subtle effect, culture and duration of stay as well as gender of poser are shown to be relevant factors for emotion processing, influencing not only behavioral but also neural responses in female and male immigrants. PMID:22642400
Facial Emotion Recognition and Expression in Parkinson’s Disease: An Emotional Mirror Mechanism?
Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J.; Kilner, James
2017-01-01
Background and aim Parkinson’s disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants in order to explore the relationship between these two abilities and any differences between the two groups of participants. Methods Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-alternative forced-choice format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. Results For emotion recognition, PD patients scored lower than HC on the Ekman total score (p<0.001), and on the sub-scores for happiness, fear, anger, and sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness, anger (all p<0.001). RT and the level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between facial emotion recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). Conclusions PD patients showed difficulties in recognizing emotional facial expressions produced by others and in posing facial emotional expressions compared to healthy subjects. The linear correlation between recognition and expression in both experimental groups suggests that the two mechanisms share a common system, which may deteriorate in patients with PD. These results open new clinical and rehabilitation perspectives. PMID:28068393
Feasibility Testing of a Wearable Behavioral Aid for Social Learning in Children with Autism.
Daniels, Jena; Haber, Nick; Voss, Catalin; Schwartz, Jessey; Tamura, Serena; Fazel, Azar; Kline, Aaron; Washington, Peter; Phillips, Jennifer; Winograd, Terry; Feinstein, Carl; Wall, Dennis P
2018-01-01
Recent advances in computer vision and wearable technology have created an opportunity to introduce mobile therapy systems for autism spectrum disorders (ASD) that can respond to the increasing demand for therapeutic interventions; however, feasibility questions must be answered first. We studied the feasibility of a prototype therapeutic tool for children with ASD using Google Glass, examining whether children with ASD would wear such a device, whether providing the emotion classification would improve emotion recognition, and how emotion recognition differs between ASD participants and neurotypical controls (NC). We ran a controlled laboratory experiment with 43 children: 23 with ASD and 20 NC. Children identified static facial images on a computer screen as one of 7 emotions in 3 successive batches: the first with no information about emotion provided to the child, the second with the correct classification from the Glass labeling the emotion, and the third again without emotion information. We then trained a logistic regression classifier on the emotion confusion matrices generated by the two information-free batches to predict ASD versus NC. All 43 children were comfortable wearing the Glass. ASD and NC participants who completed the computer task with Glass providing audible emotion labeling (n = 33) showed increased accuracies in emotion labeling, and the logistic regression classifier achieved an accuracy of 72.7%. Further analysis suggests that the ability to recognize surprise, fear, and neutrality may distinguish ASD cases from NC. This feasibility study supports the utility of a wearable device for social affective learning in ASD children and demonstrates subtle differences in how ASD and NC children perform on an emotion recognition task. Schattauer GmbH Stuttgart.
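The classification step lends itself to a short sketch. The code below uses made-up confusion matrices (the real features, their separability, and the exact validation scheme are not described here in enough detail to reproduce): each child's 7x7 emotion confusion matrix is flattened into a feature vector for a logistic regression separating ASD from NC.

```python
# Sketch (with synthetic data) of classifying ASD vs NC from flattened
# emotion confusion matrices. All counts and effect sizes are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n_asd, n_nc = 23, 20

def confusions(n, off_diag):
    """n hypothetical 7x7 confusion matrices, flattened to 49-dim vectors."""
    mats = np.full((n, 7, 7), off_diag, float) + np.eye(7) * 10
    return (mats + rng.random((n, 7, 7))).reshape(n, 49)

# Hypothetical pattern: ASD children make more off-diagonal confusions.
X = np.vstack([confusions(n_asd, 1.5), confusions(n_nc, 0.5)])
y = np.array([1] * n_asd + [0] * n_nc)

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```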
Emotion recognition based on physiological changes in music listening.
Kim, Jonghwa; André, Elisabeth
2008-12-01
Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological dataset to a feature-based multiclass classification. In order to collect a physiological dataset from multiple subjects over many weeks, we used a musical induction method which spontaneously leads subjects to real emotional states, without any deliberate lab setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. Improved recognition accuracy of 95% and 70% for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.
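The dichotomous idea behind EMDC can be sketched in a few lines. The code below is an assumption-level reconstruction, not the authors' pLDA: it cascades plain linear discriminant analysis, first separating high from low arousal and then valence within each arousal branch, on toy features.

```python
# Cascade-of-binary-decisions sketch of the EMDC idea on synthetic features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

rng = np.random.default_rng(2)
# Toy physiological features for 4 classes: (arousal, valence) in {0,1}^2.
X = rng.standard_normal((400, 6))
arousal = (rng.random(400) > 0.5).astype(int)
valence = (rng.random(400) > 0.5).astype(int)
X[:, 0] += 2.0 * arousal            # feature 0 carries arousal information
X[:, 1] += 2.0 * valence            # feature 1 carries valence information

stage1 = LDA().fit(X, arousal)      # first dichotomy: high vs low arousal
stage2 = {a: LDA().fit(X[arousal == a], valence[arousal == a])
          for a in (0, 1)}          # second dichotomy within each branch

def predict(x):
    a = int(stage1.predict(x[None])[0])
    v = int(stage2[a].predict(x[None])[0])
    return a, v

pred = np.array([predict(x) for x in X])
acc = np.mean((pred[:, 0] == arousal) & (pred[:, 1] == valence))
print(f"cascade accuracy on training data: {acc:.2f}")
```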
Wingenbach, Tanja S. H.; Ashwin, Chris; Brosnan, Mark
2016-01-01
Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride), each expressed at three different intensities, plus neutral expressions. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which in turn had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted, replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author. PMID:26784347
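The unbiased (Hu) hit rate mentioned here is Wagner's (1993) correction for response bias, computable directly from a confusion matrix; the counts below are invented for illustration.

```python
# Wagner's unbiased hit rate: Hu = hits^2 / (times presented * times chosen),
# which penalises accuracy inflated by overusing a response label.
import numpy as np

# Hypothetical confusion matrix: rows = presented emotion, cols = response.
conf = np.array([[18,  1,  1],
                 [ 4, 14,  2],
                 [ 6,  2, 12]])

hits = np.diag(conf).astype(float)
presented = conf.sum(axis=1)        # row totals: how often each was shown
chosen = conf.sum(axis=0)           # column totals: how often each was chosen
raw = hits / presented
hu = hits ** 2 / (presented * chosen)

for i, label in enumerate(["anger", "fear", "sadness"]):
    print(f"{label:8s} raw={raw[i]:.2f}  Hu={hu[i]:.2f}")
```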
Emotion-independent face recognition
NASA Astrophysics Data System (ADS)
De Silva, Liyanage C.; Esther, Kho G. P.
2000-12-01
Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, back propagation neural network and generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image representing the peak expression for each emotion of each person apart from the neutral expression. The feature vectors used for comparison in the Euclidean distance method and for training the neural network must be all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.
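The eigenface pipeline this abstract describes is standard and easy to sketch: PCA on mean-centred pixel vectors, followed by nearest-neighbour matching in the reduced space (the Euclidean-distance classifier; the neural-network variants are omitted). Sizes and data below are placeholders.

```python
# Minimal eigenface sketch: PCA on flattened face images, then classify a
# probe by Euclidean distance in the eigenface subspace. Data are random
# stand-ins; a real run would load face images instead.
import numpy as np

rng = np.random.default_rng(3)
train = rng.random((20, 32 * 32))            # 20 flattened "face images"
labels = np.repeat(np.arange(4), 5)          # 4 persons x 5 expressions

mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
eigenfaces = Vt[:10]                         # top-10 principal components
train_w = (train - mean) @ eigenfaces.T      # training-set weights

probe = train[7] + 0.05 * rng.standard_normal(32 * 32)   # noisy known face
probe_w = (probe - mean) @ eigenfaces.T
nearest = np.argmin(np.linalg.norm(train_w - probe_w, axis=1))
print(f"predicted person: {labels[nearest]}, true person: {labels[7]}")
```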
Actively Paranoid Patients with Schizophrenia Over Attribute Anger to Neutral Faces
Pinkham, Amy E.; Brensinger, Colleen; Kohler, Christian; Gur, Raquel E.; Gur, Ruben C.
2010-01-01
Previous investigations of the influence of paranoia on facial affect recognition in schizophrenia have been inconclusive as some studies demonstrate better performance for paranoid relative to non-paranoid patients and others show that paranoid patients display greater impairments. These studies have been limited by small sample sizes and inconsistencies in the criteria used to define groups. Here, we utilized an established emotion recognition task and a large sample to examine differential performance in emotion recognition ability between patients who were actively paranoid (AP) and those who were not actively paranoid (NAP). Accuracy and error patterns on the Penn Emotion Recognition test (ER40) were examined in 132 patients (64 NAP and 68 AP). Groups were defined based on the presence of paranoid ideation at the time of testing rather than diagnostic subtype. AP and NAP patients did not differ in overall task accuracy; however, an emotion by group interaction indicated that AP patients were significantly worse than NAP patients at correctly labeling neutral faces. A comparison of error patterns on neutral stimuli revealed that the groups differed only in misattributions of anger expressions, with AP patients being significantly more likely to misidentify a neutral expression as angry. The present findings suggest that paranoia is associated with a tendency to over attribute threat to ambiguous stimuli and also lend support to emerging hypotheses of amygdala hyperactivation as a potential neural mechanism for paranoid ideation. PMID:21112186
Emotion through locomotion: gender impact.
Krüger, Samuel; Sokolov, Alexander N; Enck, Paul; Krägeloh-Mann, Ingeborg; Pavlova, Marina A
2013-01-01
Body language reading is of significance for daily life social cognition and successful social interaction, and constitutes a core component of social competence. Yet it is unclear whether our ability for body language reading is gender specific. In the present work, female and male observers had to visually recognize emotions through point-light human locomotion performed by female and male actors with different emotional expressions. For subtle emotional expressions only, males surpassed females in recognition accuracy and readiness to respond to happy walking portrayed by female actors, whereas females tended to be better at recognizing hostile angry locomotion expressed by male actors. In contrast to widespread beliefs about female superiority in social cognition, the findings suggest that gender effects in recognition of emotions from human locomotion are modulated by the emotional content of actions and by opposite actor gender. In a nutshell, the study takes a further step in elucidating the impact of gender on body language reading and on neurodevelopmental and psychiatric deficits in visual social cognition.
Herba, Catherine; Phillips, Mary
2004-10-01
Intact emotion processing is critical for normal emotional development. Recent advances in neuroimaging have facilitated the examination of brain development, and have allowed for the exploration of the relationships between the development of emotion processing abilities, and that of associated neural systems. A literature review was performed of published studies examining the development of emotion expression recognition in normal children and psychiatric populations, and of the development of neural systems important for emotion processing. Few studies have explored the development of emotion expression recognition throughout childhood and adolescence. Behavioural studies suggest continued development throughout childhood and adolescence (reflected by accuracy scores and speed of processing), which varies according to the category of emotion displayed. Factors such as sex, socio-economic status, and verbal ability may also affect this development. Functional neuroimaging studies in adults highlight the role of the amygdala in emotion processing. Results of the few neuroimaging studies in children have focused on the role of the amygdala in the recognition of fearful expressions. Although results are inconsistent, they provide evidence throughout childhood and adolescence for the continued development of and sex differences in amygdalar function in response to fearful expressions. Studies exploring emotion expression recognition in psychiatric populations of children and adolescents suggest deficits that are specific to the type of disorder and to the emotion displayed. Results from behavioural and neuroimaging studies indicate continued development of emotion expression recognition and neural regions important for this process throughout childhood and adolescence. Methodological inconsistencies and disparate findings make any conclusion difficult, however. Further studies are required examining the relationship between the development of emotion expression recognition and that of underlying neural systems, in particular subcortical and prefrontal cortical structures. These will inform understanding of the neural bases of normal and abnormal emotional development, and aid the development of earlier interventions for children and adolescents with psychiatric disorders.
Emotion recognition from speech: tools and challenges
NASA Astrophysics Data System (ADS)
Al-Talabani, Abdulbasit; Sellahewa, Harin; Jassim, Sabah A.
2015-05-01
Human emotion recognition from speech is studied frequently for its importance in many applications, e.g. human-computer interaction. There is wide diversity and little agreement about the set of basic emotions or emotion-related states on the one hand, and about where the emotion-related information lies in the speech signal on the other. These diversities motivate our investigations into extracting Meta-features using the PCA approach, or using a non-adaptive random projection (RP), which significantly reduce the large-dimensional speech feature vectors that may contain a wide range of emotion-related information. Subsets of Meta-features are fused to increase the performance of the recognition model that adopts the score-based LDC classifier. We shall demonstrate that our scheme outperforms state-of-the-art results when tested on non-prompted databases or acted databases (i.e. when subjects act specific emotions while uttering a sentence). However, the huge gap between accuracy rates achieved on the different types of speech datasets raises questions about the way emotions modulate speech. In particular we shall argue that emotion recognition from speech should not be dealt with as a pure classification problem. We shall demonstrate the presence of a spectrum of different emotions in the same speech portion, especially in the non-prompted data sets, which tend to be more "natural" than the acted datasets where the subjects attempt to suppress all but one emotion.
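The non-adaptive random projection (RP) route can be sketched compactly; dimensions and data below are illustrative assumptions. A fixed Gaussian matrix maps long speech feature vectors to short meta-feature vectors while approximately preserving pairwise distances (the Johnson-Lindenstrauss property), after which any classifier (the authors use a score-based LDC) can be trained on the reduced vectors.

```python
# Non-adaptive random projection of long feature vectors to k dimensions.
import numpy as np

rng = np.random.default_rng(4)
d, k, n = 5000, 100, 50                       # original dim, reduced dim, samples
features = rng.standard_normal((n, d))        # stand-in speech feature vectors

R = rng.standard_normal((k, d)) / np.sqrt(k)  # fixed, data-independent matrix
meta = features @ R.T                         # k-dim meta-features

# Pairwise distances are approximately preserved after projection:
i, j = 0, 1
orig = np.linalg.norm(features[i] - features[j])
proj = np.linalg.norm(meta[i] - meta[j])
print(f"distance ratio after projection: {proj / orig:.3f}")
```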
Cross-cultural decoding of positive and negative non-linguistic emotion vocalizations
Laukka, Petri; Elfenbein, Hillary Anger; Söder, Nela; Nordström, Henrik; Althoff, Jean; Chui, Wanda; Iraki, Frederick K.; Rockstuhl, Thomas; Thingujam, Nutankumar S.
2013-01-01
Which emotions are associated with universally recognized non-verbal signals? We address this issue by examining how reliably non-linguistic vocalizations (affect bursts) can convey emotions across cultures. Actors from India, Kenya, Singapore, and the USA were instructed to produce vocalizations that would convey nine positive and nine negative emotions to listeners. The vocalizations were judged by Swedish listeners using a within-valence forced-choice procedure, where positive and negative emotions were judged in separate experiments. Results showed that listeners could recognize a wide range of positive and negative emotions with accuracy above chance. For positive emotions, we observed the highest recognition rates for relief, followed by lust, interest, serenity and positive surprise, with affection and pride receiving the lowest recognition rates. Anger, disgust, fear, sadness, and negative surprise received the highest recognition rates for negative emotions, with the lowest rates observed for guilt and shame. In summary, results showed that the voice can reveal both basic emotions and several positive emotions other than happiness across cultures, but self-conscious emotions such as guilt, pride, and shame seem not to be well recognized from non-linguistic vocalizations. PMID:23914178
Rigon, Arianna; Turkstra, Lyn; Mutlu, Bilge; Duff, Melissa
2018-01-01
Although moderate to severe traumatic brain injury (TBI) leads to facial affect recognition impairments in up to 39% of individuals, protective and risk factors for these deficits are unknown. The aim of the current study was to examine the effect of sex on emotion recognition abilities following TBI. We administered two separate emotion recognition tests (one static and one dynamic) to 53 individuals with moderate to severe TBI (Females=28) and 49 demographically matched comparisons (Females=22). We then investigated the presence of a sex-by-group interaction in emotion recognition accuracy. In the comparison group, there were no sex differences. In the TBI group, however, females significantly outperformed males in the dynamic (but not the static) task. Moreover, males (but not females) with TBI performed significantly worse than comparison participants in the dynamic task. Further analysis revealed that sex differences in emotion recognition abilities within the TBI group could not be explained by lesion location, TBI severity, or other neuropsychological variables. These findings suggest that sex may serve as a protective factor for social impairment following TBI and inform clinicians working with TBI as well as research on the neurophysiological correlates of sex differences in social functioning. PMID:27245826
CACNA1C risk variant affects facial emotion recognition in healthy individuals.
Nieratschker, Vanessa; Brückmann, Christof; Plewnia, Christian
2015-11-27
Recognition and correct interpretation of facial emotion is essential for social interaction and communication. Previous studies have shown that impairments in this cognitive domain are common features of several psychiatric disorders. Recent association studies identified CACNA1C as one of the most promising genetic risk factors for psychiatric disorders, and previous evidence suggests that the most replicated risk variant in CACNA1C (rs1006737) affects emotion recognition and processing. However, studies investigating the influence of rs1006737 on this intermediate phenotype in healthy subjects at the behavioral level are largely missing to date. Here, we applied the "Reading the Mind in the Eyes" test, a facial emotion recognition paradigm, in a cohort of 92 healthy individuals to address this question. Whereas accuracy was not affected by genotype, CACNA1C rs1006737 risk-allele carriers (AA/AG) showed significantly slower mean response times compared to individuals homozygous for the G-allele, indicating that healthy risk-allele carriers require more information to correctly identify a facial emotion. Our study is the first to provide evidence for an impairing behavioral effect of the CACNA1C risk variant rs1006737 on facial emotion recognition in healthy individuals and adds to the growing number of studies pointing towards CACNA1C as affecting intermediate phenotypes of psychiatric disorders.
Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech
Cao, Houwei; Verma, Ragini; Nenkova, Ani
2014-01-01
We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotion and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of individual methods. Furthermore on the spontaneous data the ranking and standard classification are complementary and we obtain marked improvement when we combine the two classifiers by late-stage fusion. PMID:25422534
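The per-emotion ranking formulation can be emulated with the standard pairwise transform, shown below on synthetic data. This is an assumed reconstruction rather than the paper's code: within each speaker ("query"), utterances of the target emotion should outrank all others, so a linear SVM is trained on within-speaker difference vectors.

```python
# Ranking-SVM sketch via the pairwise transform, one ranker per emotion.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
X = rng.standard_normal((120, 8))            # stand-in utterance features
emotion = rng.integers(0, 4, 120)            # 4 emotion classes
speaker = rng.integers(0, 6, 120)            # 6 speakers act as "queries"
X[emotion == 0, 0] += 1.5                    # make target emotion 0 separable

pairs, signs = [], []
for s in np.unique(speaker):
    idx = np.where(speaker == s)[0]
    pos = idx[emotion[idx] == 0]             # target-emotion utterances
    neg = idx[emotion[idx] != 0]             # everything else by this speaker
    for p in pos:
        for q in neg:
            pairs.append(X[p] - X[q]); signs.append(1)
            pairs.append(X[q] - X[p]); signs.append(-1)

ranker = LinearSVC(C=1.0).fit(np.array(pairs), np.array(signs))
scores = X @ ranker.coef_.ravel()            # higher score = more "emotion 0"
print(f"mean score, target vs rest: {scores[emotion == 0].mean():.2f} "
      f"vs {scores[emotion != 0].mean():.2f}")
```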
Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V
2014-07-01
Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.
Gender differences in the relationship between social communication and emotion recognition.
Kothari, Radha; Skuse, David; Wakefield, Justin; Micali, Nadia
2013-11-01
To investigate the association between autistic traits and emotion recognition in a large community sample of children using facial and social motion cues, additionally stratifying by gender. A general population sample of 3,666 children from the Avon Longitudinal Study of Parents and Children (ALSPAC) were assessed on their ability to correctly recognize emotions using the faces subtest of the Diagnostic Analysis of Non-Verbal Accuracy, and the Emotional Triangles Task, a novel test assessing recognition of emotion from social motion cues. Children with autistic-like social communication difficulties, as assessed by the Social Communication Disorders Checklist, were compared with children without such difficulties. Autistic-like social communication difficulties were associated with poorer recognition of emotion from social motion cues in both genders, but were associated with poorer facial emotion recognition in boys only (odds ratio = 1.9, 95% CI = 1.4, 2.6, p = .0001). This finding must be considered in light of lower power to detect differences in girls. In this community sample of children, greater deficits in social communication skills are associated with poorer discrimination of emotions, implying there may be an underlying continuum of liability to the association between these characteristics. As a similar degree of association was observed in both genders on a novel test of social motion cues, the relatively good performance of girls on the more familiar task of facial emotion discrimination may be due to compensatory mechanisms. Our study might indicate the existence of a cognitive process by which girls with underlying autistic traits can compensate for their covert deficits in emotion recognition, although this would require further investigation. Copyright © 2013 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
Wang, Pengyun; Li, Juan; Li, Huijie; Li, Bing; Jiang, Yang; Bao, Feng; Zhang, Shouzi
2013-11-01
This study investigated whether the observed absence of emotional memory enhancement in recognition tasks in patients with amnestic mild cognitive impairment (aMCI) could be related to their greater proportion of familiarity-based responses for all stimuli, and whether recognition tests with emotional items had better discriminative power for aMCI patients than those with neutral items. In total, 31 aMCI patients and 30 healthy older adults participated in a recognition test followed by remember/know judgments. Positive, neutral, and negative faces were used as stimuli. For overall recognition performance, emotional memory enhancement was found only in healthy controls; they remembered more negative and positive stimuli than neutral ones. For "remember" responses, we found equivalent emotional memory enhancement in both groups, though a greater proportion of "remember" responses was observed in normal controls. For "know" responses, aMCI patients presented a larger proportion than normal controls did, and their "know" responses were not affected by emotion. A negative correlation was found between emotional enhancement effect and the memory performance related to "know" responses. In addition, receiver operating characteristic curve analysis revealed higher diagnostic accuracy for recognition test with emotional stimuli than with neutral stimuli. The present results implied that the absence of the emotional memory enhancement effect in aMCI patients might be related to their tendency to rely more on familiarity-based "know" responses for all stimuli. Furthermore, recognition memory tests using emotional stimuli may be better able than neutral stimuli to differentiate people with aMCI from cognitively normal older adults. PsycINFO Database Record (c) 2013 APA, all rights reserved.
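The ROC comparison reported in this abstract is straightforward to reproduce in outline. The scores below are synthetic and purely illustrative (group means and spreads are invented); the point is only to show how AUC quantifies whether emotional-item recognition separates patients from controls better than neutral-item recognition.

```python
# AUC comparison on synthetic recognition scores for two stimulus types.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
is_patient = np.repeat([1, 0], [31, 30])     # 31 aMCI, 30 controls

# Hypothetical scores: emotional items separate the groups more cleanly.
emotional = np.where(is_patient, rng.normal(0.55, 0.1, 61),
                     rng.normal(0.75, 0.1, 61))
neutral = np.where(is_patient, rng.normal(0.60, 0.1, 61),
                   rng.normal(0.68, 0.1, 61))

# Patients score lower, so negate scores so higher values indicate patients.
print(f"AUC, emotional stimuli: {roc_auc_score(is_patient, -emotional):.2f}")
print(f"AUC, neutral stimuli:   {roc_auc_score(is_patient, -neutral):.2f}")
```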
Smitha, K G; Vinod, A P
2015-11-01
Children with autism spectrum disorder have difficulty in understanding emotional and mental states from the facial expressions of the people they interact with. The inability to understand other people's emotions hinders their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. The portability of the system ensures ease of use and real-time emotion recognition, which aids immediate feedback while communicating with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to implement in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for realization of a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex 7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits.
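The word-length question the study raises can be explored in software before committing to hardware. The sketch below simulates, rather than synthesises, the datapath: a PCA projection is quantised to 8-bit fixed point and the resulting projection error is measured. All sizes and the quantisation model are our assumptions.

```python
# Simulating the accuracy cost of an 8-bit fixed-point PCA projection.
import numpy as np

rng = np.random.default_rng(6)
faces = rng.random((100, 64))                 # stand-in face feature vectors
mean = faces.mean(axis=0)
U, S, Vt = np.linalg.svd(faces - mean, full_matrices=False)
W = Vt[:8]                                    # 8-component projection matrix

def quantize(a, bits=8):
    """Symmetric fixed-point model: round to the nearest representable step."""
    scale = (2 ** (bits - 1) - 1) / np.max(np.abs(a))
    return np.round(a * scale) / scale

proj_float = (faces - mean) @ W.T
proj_fixed = quantize(faces - mean) @ quantize(W).T
err = np.linalg.norm(proj_float - proj_fixed) / np.linalg.norm(proj_float)
print(f"relative projection error at 8 bits: {err:.4f}")
```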
Falkmer, Marita; Black, Melissa; Tang, Julia; Fitzgerald, Patrick; Girdler, Sonya; Leung, Denise; Ordqvist, Anna; Tan, Tele; Jahan, Ishrat; Falkmer, Torbjorn
2016-01-01
Local bias in visual processing in children with autism spectrum disorders (ASD) has been reported to result in difficulties in recognizing faces and facially expressed emotions but superior ability in disembedding figures; however, associations between these abilities within a group of children with and without ASD have not been explored. Possible associations in performance on the Visual Perception Skills Figure-Ground test, a face recognition test and an emotion recognition test were investigated in 25 children aged 8-12 years with high-functioning autism/Asperger syndrome, and in comparison to 33 typically developing children. Analyses indicated a weak positive correlation between accuracy in Figure-Ground recognition and emotion recognition. No other correlation estimates were significant. These findings challenge both the enhanced perceptual function hypothesis and the weak central coherence hypothesis, and accentuate the importance of further scrutinizing the existence and nature of local visual bias in ASD.
Moghadam, Saeed Montazeri; Seyyedsalehi, Seyyed Ali
2018-05-31
Nonlinear components extracted from deep structures of bottleneck neural networks exhibit a great ability to express the input space in a low-dimensional manifold. Sharing and combining the components boosts the capability of the neural networks to synthesize and interpolate new and imaginary data. This synthesis is possibly a simple model of imagination in the human brain, where the components are expressed in a nonlinear low-dimensional manifold. The current paper introduces a novel Dynamic Deep Bottleneck Neural Network to analyze and extract three main features of videos regarding the expression of emotions on the face. These main features are identity, emotion and expression intensity, which lie in three different sub-manifolds of one nonlinear general manifold. The proposed model, which enjoys the advantages of recurrent networks, was used to analyze the sequence and dynamics of information in videos. Notably, the model also has the potential to synthesize new videos showing variations of one specific emotion on the face of unknown subjects. Experiments on the discrimination and recognition ability of the extracted components showed that the proposed model achieves an average of 97.77% accuracy in recognition of six prominent emotions (Fear, Surprise, Sadness, Anger, Disgust, and Happiness), and 78.17% accuracy in recognition of intensity. The produced videos revealed variations from neutral to the apex of an emotion on the face of an unfamiliar test subject, with an average similarity of 0.8 to reference videos on the SSIM scale. Copyright © 2018 Elsevier Ltd. All rights reserved.
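The described architecture, a bottleneck code partitioned into identity, emotion and intensity sub-vectors, can be outlined as below. This is an architecture-only sketch inferred from the abstract (layer sizes, the split dimensions, and the omission of the recurrent part are all assumptions); recombining sub-vectors from different inputs is what would let such a model synthesise new expression videos.

```python
# Architecture-only sketch of a bottleneck autoencoder with a partitioned code.
import torch
import torch.nn as nn

class BottleneckNet(nn.Module):
    def __init__(self, in_dim=1024, id_dim=16, emo_dim=6, int_dim=2):
        super().__init__()
        code = id_dim + emo_dim + int_dim
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.Tanh(),
                                 nn.Linear(256, code))
        self.dec = nn.Sequential(nn.Linear(code, 256), nn.Tanh(),
                                 nn.Linear(256, in_dim))
        self.dims = [id_dim, emo_dim, int_dim]   # identity/emotion/intensity

    def forward(self, x):
        z = self.enc(x)
        identity, emotion, intensity = torch.split(z, self.dims, dim=-1)
        return self.dec(z), (identity, emotion, intensity)

net = BottleneckNet()
frame = torch.rand(1, 1024)                      # one flattened face frame
recon, (ident, emo, inten) = net(frame)
print(recon.shape, ident.shape, emo.shape, inten.shape)
```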
Study of wavelet packet energy entropy for emotion classification in speech and glottal signals
NASA Astrophysics Data System (ADS)
He, Ling; Lech, Margaret; Zhang, Jing; Ren, Xiaomei; Deng, Lihua
2013-07-01
Automatic speech emotion recognition has important applications in human-machine communication. The majority of current research in this area focuses on finding optimal feature parameters. In recent studies, several glottal features were examined as potential cues for emotion differentiation. In this study, a new type of feature parameter is proposed, which calculates energy entropy on values within selected Wavelet Packet frequency bands. The modeling and classification tasks are conducted using the classical GMM algorithm. The experiments use two data sets: the Speech Under Simulated Emotion (SUSE) data set annotated with three different emotions (angry, neutral and soft) and the Berlin Emotional Speech (BES) database annotated with seven different emotions (angry, bored, disgust, fear, happy, sad and neutral). The average classification accuracy achieved for the SUSE data (74%-76%) is significantly higher than the accuracy achieved for the BES data (51%-54%). In both cases, the accuracy was significantly higher than the respective random guessing levels (33% for SUSE and 14.3% for BES).
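A plain reading of the proposed feature can be sketched with PyWavelets: decompose the signal into wavelet packet bands, then take the Shannon entropy of the normalised energy values within each band. The signal, wavelet choice and depth below are illustrative assumptions; the GMM modelling stage is omitted.

```python
# Wavelet packet energy-entropy features for one synthetic signal.
import numpy as np
import pywt

rng = np.random.default_rng(7)
signal = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1024)) \
         + 0.3 * rng.standard_normal(1024)

wp = pywt.WaveletPacket(signal, wavelet="db4", maxlevel=3)
features = []
for node in wp.get_level(3, order="freq"):    # 8 frequency-ordered bands
    e = node.data ** 2                        # within-band energy values
    p = e / (e.sum() + 1e-12)                 # normalise to a distribution
    features.append(-np.sum(p * np.log2(p + 1e-12)))   # Shannon entropy
print(np.round(features, 2))                  # the energy-entropy feature vector
```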
State-dependent alteration in face emotion recognition in depression.
Anderson, Ian M; Shippen, Clare; Juhasz, Gabriella; Chase, Diana; Thomas, Emma; Downey, Darragh; Toth, Zoltan G; Lloyd-Williams, Kathryn; Elliott, Rebecca; Deakin, J F William
2011-04-01
Negative biases in emotional processing are well recognised in people who are currently depressed but are less well described in those with a history of depression, where such biases may contribute to vulnerability to relapse. To compare accuracy, discrimination and bias in face emotion recognition in those with current and remitted depression. The sample comprised a control group (n = 101), a currently depressed group (n = 30) and a remitted depression group (n = 99). Participants provided valid data after receiving a computerised face emotion recognition task following standardised assessment of diagnosis and mood symptoms. In the control group women were more accurate in recognising emotions than men owing to greater discrimination. Among participants with depression, those in remission correctly identified more emotions than controls owing to increased response bias, whereas those currently depressed recognised fewer emotions owing to decreased discrimination. These effects were most marked for anger, fear and sadness but there was no significant emotion × group interaction, and a similar pattern tended to be seen for happiness although not for surprise or disgust. These differences were confined to participants who were antidepressant-free, with those taking antidepressants having similar results to the control group. Abnormalities in face emotion recognition differ between people with current depression and those in remission. Reduced discrimination in depressed participants may reflect withdrawal from the emotions of others, whereas the increased bias in those with a history of depression could contribute to vulnerability to relapse. The normal face emotion recognition seen in those taking medication may relate to the known effects of antidepressants on emotional processing and could contribute to their ability to protect against depressive relapse.
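Discrimination and bias, as used in this abstract, are the two quantities signal detection theory separates. The sketch below computes d' (discrimination) and criterion c (bias) from raw counts with a standard log-linear correction; the counts are invented to show the contrast, not taken from the paper.

```python
# d' and criterion c from hit/miss/false-alarm/correct-rejection counts.
from scipy.stats import norm

def sdt(hits, misses, false_alarms, correct_rejections):
    """Return (d-prime, criterion c), with a 0.5 log-linear correction."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(h) - norm.ppf(f), -0.5 * (norm.ppf(h) + norm.ppf(f))

# The same hit rate can come from sharper discrimination or a liberal bias:
d1, c1 = sdt(40, 10, 10, 40)   # good discrimination, near-neutral criterion
d2, c2 = sdt(40, 10, 25, 25)   # same hits, achieved via liberal responding
print(f"case 1: d'={d1:.2f}, c={c1:.2f}")
print(f"case 2: d'={d2:.2f}, c={c2:.2f}")
```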
Niedtfeld, Inga; Defiebre, Nadine; Regenbogen, Christina; Mier, Daniela; Fenske, Sabrina; Kirsch, Peter; Lis, Stefanie; Schmahl, Christian
2017-04-01
Previous research has revealed alterations and deficits in facial emotion recognition in patients with borderline personality disorder (BPD). During interpersonal communication in daily life, social signals such as speech content, variation in prosody, and facial expression need to be considered simultaneously. We hypothesized that deficits in higher level integration of social stimuli contribute to difficulties in emotion recognition in BPD, and heightened arousal might explain this effect. Thirty-one patients with BPD and thirty-one healthy controls were asked to identify emotions in short video clips, which were designed to represent different combinations of the three communication channels: facial expression, speech content, and prosody. Skin conductance was recorded as a measure of sympathetic arousal, while controlling for state dissociation. Patients with BPD showed lower mean accuracy scores than healthy control subjects in all conditions comprising emotional facial expressions. This was true for the condition with facial expression only, and for the combination of all three communication channels. Electrodermal responses were enhanced in BPD only in response to auditory stimuli. In line with the major body of facial emotion recognition studies, we conclude that deficits in the interpretation of facial expressions lead to the difficulties observed in multimodal emotion processing in BPD.
Development and validation of an Argentine set of facial expressions of emotion.
Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro
2017-02-01
Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.
[Perception features of emotional intonation of short pseudowords].
Dmitrieva, E S; Gel'man, V Ia; Zaĭtseva, K A; Orlov, A M
2012-01-01
Reaction time and recognition accuracy for speech emotional intonations in short meaningless words that differed in only one phoneme, with and without background noise, were studied in 49 adults aged 20-79 years. The results were compared with the same parameters for emotional intonations in meaningful speech utterances under similar conditions. Perception of emotional intonations at different linguistic levels (phonological and lexico-semantic) was found to have both common features and certain peculiarities. Recognition characteristics of emotional intonations depending on the gender and age of listeners appeared to be invariant with regard to the linguistic level of the speech stimuli. The phonemic composition of the pseudowords was found to influence emotional perception, especially against background noise. The acoustic characteristic of the stimuli most responsible for the perception of speech emotional prosody in short meaningless words under the two experimental conditions, i.e. with and without background noise, was the fundamental frequency variation.
Deficits in Facial Emotion Recognition in Schizophrenia: A Replication Study with Korean Subjects
Lee, Seung Jae; Lee, Hae-Kook; Kweon, Yong-Sil; Lee, Chung Tai
2010-01-01
Objective: We investigated the deficit in the recognition of facial emotions in a sample of medicated, stable Korean patients with schizophrenia using Korean facial emotion pictures and examined whether the possible impairments would corroborate previous findings. Methods: Fifty-five patients with schizophrenia and 62 healthy control subjects completed the Facial Affect Identification Test with a new set of 44 colored photographs of Korean faces including the six universal emotions as well as neutral faces. Results: Korean patients with schizophrenia showed impairments in the recognition of sad, fearful, and angry faces [F(1,114)=6.26, p=0.014; F(1,114)=6.18, p=0.014; F(1,114)=9.28, p=0.003, respectively], but their accuracy was no different from that of controls in the recognition of happy emotions. Higher total and three subscale scores of the Positive and Negative Syndrome Scale (PANSS) correlated with worse performance on both angry and neutral faces. Correct responses on happy stimuli were negatively correlated with negative symptom scores of the PANSS. Patients with schizophrenia also exhibited different patterns of misidentification relative to normal controls. Conclusion: These findings were consistent with previous studies carried out with different ethnic groups, suggesting cross-cultural similarities in facial recognition impairment in schizophrenia. PMID:21253414
Emotion recognition in fathers and mothers at high-risk for child physical abuse.
Asla, Nagore; de Paúl, Joaquín; Pérez-Albéniz, Alicia
2011-09-01
The present study was designed to determine whether parents at high risk for physical child abuse, in comparison with parents at low risk, show deficits in emotion recognition, as well as to examine the moderator effect of gender and stress on the relationship between risk for physical child abuse and emotion recognition. Based on their scores on the Abuse Scale of the CAP Inventory (Milner, 1986), 64 parents at high risk (24 fathers and 40 mothers) and 80 parents at low risk (40 fathers and 40 mothers) for physical child abuse were selected. The Subtle Expression Training Tool/Micro Expression Training Tool (Ekman, 2004a, 2004b) and the Diagnostic Analysis of Nonverbal Accuracy II (Nowicki & Carton, 1993) were used to assess emotion recognition. As expected, parents at high risk, in contrast to parents at low risk, showed deficits in emotion recognition. However, differences between high- and low-risk participants were observed only for fathers, but not for mothers. Whereas fathers at high risk for physical child abuse made more errors than mothers at high risk, no differences between mothers at low risk and fathers at low risk were found. No interaction between stress, gender, and risk status was observed for errors in emotion recognition. The present findings, if confirmed with physical abusers, could be helpful to further our understanding of deficits in processing information of physically abusive parents and to develop treatment strategies specifically focused on emotion recognition. Moreover, if gender differences can be confirmed, the findings could be helpful to develop specific treatment programs for abusive fathers. Copyright © 2011 Elsevier Ltd. All rights reserved.
Deska, Jason C; Lloyd, E Paige; Hugenberg, Kurt
2018-04-01
The ability to rapidly and accurately decode facial expressions is adaptive for human sociality. Although judgments of emotion are primarily determined by musculature, static face structure can also impact emotion judgments. The current work investigates how facial width-to-height ratio (fWHR), a stable feature of all faces, influences perceivers' judgments of expressive displays of anger and fear (Studies 1a, 1b, & 2), and anger and happiness (Study 3). Across 4 studies, we provide evidence consistent with the hypothesis that perceivers more readily see anger on faces with high fWHR compared with those with low fWHR, whereas low fWHR instead facilitates the recognition of fear and happiness. This bias emerges when participants are led to believe that targets displaying otherwise neutral faces are attempting to mask an emotion (Studies 1a & 1b), and is evident when faces display an emotion (Studies 2 & 3). Together, these studies suggest that target facial width-to-height ratio biases ascriptions of emotion with consequences for emotion recognition speed and accuracy. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Vogel, Bastian D; Brück, Carolin; Jacob, Heike; Eberle, Mark; Wildgruber, Dirk
2016-07-07
Impaired interpretation of nonverbal emotional cues in patients with schizophrenia has been reported in several studies, and a clinical relevance of these deficits for social functioning has been assumed. However, it is unclear to what extent the impairments depend on specific emotions or specific channels of nonverbal communication. Here, the effect of cue modality and emotional category on accuracy of emotion recognition was evaluated in 21 patients with schizophrenia and compared to a healthy control group (n = 21). To this end, dynamic stimuli comprising speakers of both genders in three different sensory modalities (auditory, visual and audiovisual) and five emotional categories (happy, alluring, neutral, angry and disgusted) were used. Patients with schizophrenia were found to be impaired in emotion recognition in comparison to the control group across all stimuli. Considering specific emotions, more severe deficits were revealed in the recognition of alluring stimuli and less severe deficits in the recognition of disgusted stimuli as compared to all other emotions. Regarding cue modality, the extent of the impairment in emotion recognition did not differ significantly between auditory and visual cues across all emotional categories. However, patients with schizophrenia showed significantly more severe disturbances for vocal as compared to facial cues when sexual interest was expressed (alluring stimuli), whereas more severe disturbances for facial as compared to vocal cues were observed when happiness or anger was expressed. Our results confirm that perceptual impairments can be observed for vocal as well as facial cues conveying various social and emotional connotations. The observed differences in severity of impairment, with the most severe deficits for alluring expressions, might be related to specific difficulties in recognizing the complex social information of interpersonal intentions as compared to "basic" emotional states. Therefore, future studies evaluating perception of nonverbal cues should consider a broader range of social and emotional signals beyond basic emotions, including attitudes and interpersonal intentions. Identifying specific domains of social perception particularly prone to misunderstandings in patients with schizophrenia might allow for a refinement of interventions aiming at improving social functioning.
Dunn, Erin C; Crawford, Katherine M; Soare, Thomas W; Button, Katherine S; Raffeld, Miriam R; Smith, Andrew D A C; Penton-Voak, Ian S; Munafò, Marcus R
2018-03-07
Emotion recognition skills are essential for social communication. Deficits in these skills have been implicated in mental disorders. Prior studies of clinical and high-risk samples have consistently shown that children exposed to adversity are more likely than their unexposed peers to have emotion recognition skills deficits. However, only one population-based study has examined this association. We analyzed data from children participating in the Avon Longitudinal Study of Parents and Children, a prospective birth cohort (n = 6,506). We examined the association between eight adversities, assessed repeatedly from birth to age 8 (caregiver physical or emotional abuse; sexual or physical abuse; maternal psychopathology; one adult in the household; family instability; financial stress; parent legal problems; neighborhood disadvantage) and the ability to recognize facial displays of emotion measured using the faces subtest of the Diagnostic Assessment of Non-Verbal Accuracy (DANVA) at age 8.5 years. In addition to examining the role of exposure (vs. nonexposure) to each type of adversity, we also evaluated the role of the timing, duration, and recency of each adversity using a Least Angle Regression variable selection procedure. Over three-quarters of the sample experienced at least one adversity. We found no evidence to support an association between emotion recognition deficits and previous exposure to adversity, either in terms of total lifetime exposure, timing, duration, or recency, or when stratifying by sex. Results from the largest population-based sample suggest that even extreme forms of adversity are unrelated to emotion recognition deficits as measured by the DANVA, suggesting the possible immutability of emotion recognition in the general population. These findings emphasize the importance of population-based studies to generate generalizable results. © 2018 Association for Child and Adolescent Mental Health.
Analysis of physiological signals for recognition of boredom, pain, and surprise emotions.
Jang, Eun-Hye; Park, Byoung-Jun; Park, Mi-Sook; Kim, Sang-Hyeob; Sohn, Jin-Hun
2015-06-18
The aim of the study was to examine the physiological differences among boredom, pain, and surprise, and to propose approaches for emotion recognition based on physiological signals. The three emotions were induced through the presentation of emotional stimuli, and electrocardiography (ECG), electrodermal activity (EDA), skin temperature (SKT), and photoplethysmography (PPG) were measured to collect a dataset from 217 participants experiencing the emotions. Twenty-seven physiological features were extracted from the signals to classify the three emotions. Discriminant function analysis (DFA), a statistical method, and five machine learning algorithms (linear discriminant analysis (LDA), classification and regression trees (CART), self-organizing maps (SOM), the Naïve Bayes algorithm, and support vector machines (SVM)) were used to classify the emotions. The results show significant differences in physiological responses among the emotions in heart rate (HR), skin conductance level (SCL), skin conductance response (SCR), mean skin temperature (meanSKT), blood volume pulse (BVP), and pulse transit time (PTT); the highest recognition accuracy, 84.7%, was obtained using DFA. This study demonstrates the differences among boredom, pain, and surprise and identifies the best-performing recognizer for classifying the three emotions from physiological signals.
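As a rough illustration of the classification step this abstract describes, the sketch below trains scikit-learn's LinearDiscriminantAnalysis (a close off-the-shelf analogue of the DFA used in the study) on a placeholder 217 × 27 feature matrix. The feature values and labels are synthetic stand-ins, not the study's data.

```python
# A minimal sketch, assuming the 27 physiological features are already extracted.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(217, 27))     # placeholder for the ECG/EDA/SKT/PPG features
y = rng.integers(0, 3, size=217)   # 0 = boredom, 1 = pain, 2 = surprise

clf = LinearDiscriminantAnalysis()           # stands in for the study's DFA
scores = cross_val_score(clf, X, y, cv=5)    # 5-fold cross-validation
print(f"mean cross-validated accuracy: {scores.mean():.3f}")  # ~chance on random data
```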
Derntl, Birgit; Habel, Ute; Windischberger, Christian; Robinson, Simon; Kryspin-Exner, Ilse; Gur, Ruben C; Moser, Ewald
2009-08-04
The ability to recognize emotions in facial expressions relies on an extensive neural network with the amygdala as the key node, as has typically been demonstrated for the processing of fearful stimuli. A sufficient characterization of the factors influencing and modulating amygdala function, however, has not yet been reached. Results on its involvement in recognizing all, or only certain, negative emotions are lacking or divergent, and the influence of gender and ethnicity is still under debate. This high-resolution fMRI study addresses some of the relevant parameters, such as emotional valence, gender and poser ethnicity, on amygdala activation during facial emotion recognition in 50 Caucasian subjects. Stimuli were color photographs of emotional Caucasian and African American faces. Bilateral amygdala activation was obtained for all emotional expressions (anger, disgust, fear, happy, and sad) and neutral faces across all subjects. However, only in males was a significant correlation between amygdala activation and behavioral response to fearful stimuli observed, indicating higher amygdala responses with better fear recognition and thus pointing to subtle gender differences. No significant influence of poser ethnicity on amygdala activation occurred, but analysis of recognition accuracy revealed a significant, emotion-dependent impact of poser ethnicity. Applying high-resolution fMRI while subjects performed an explicit emotion recognition task revealed bilateral amygdala activation for all emotions presented and for neutral expressions. This mechanism seems to operate similarly in healthy females and males and for both in-group and out-group ethnicities. Our results support the assumption that an intact amygdala response is fundamental to the processing of these salient stimuli, owing to its relevance-detection function.
Demirel, Husrev; Yesilbas, Dilek; Ozver, Ismail; Yuksek, Erhan; Sahin, Feyzi; Aliustaoglu, Suheyla; Emul, Murat
2014-04-01
It is well known that patients with bipolar disorder are more prone to violence and commit more criminal acts than the general population. An inability to empathize and to perceive other people's feelings and facial expressions is strongly related to criminal behavior and increases the risk of delinquency. In this study, we aimed to investigate deficits in facial emotion recognition ability in euthymic bipolar patients who had committed an offense, compared with non-delinquent euthymic patients with bipolar disorder. Fifty-five euthymic patients with delinquent behaviors and 54 non-delinquent euthymic bipolar patients as a control group were included in the study. Ekman's Facial Emotion Recognition Test, sociodemographic data, the Hare Psychopathy Checklist, the Hamilton Depression Rating Scale and the Young Mania Rating Scale were applied to both groups. There were no significant differences between the case and control groups in mean age, gender, level of education, mean age at disease onset, or suicide attempts (p>0.05). The three most common delinquent behaviors in patients with euthymic bipolar disorder were injury (30.8%), threat or insult (20%) and homicide (12.7%). The most accurately identified facial emotion was "happy" (>99% in both groups), while the most frequently misidentified was "fear" (<50% in both groups). The overall accuracy of facial emotion recognition was significantly lower in patients with delinquent behaviors than in non-delinquent ones (p<0.05). Accuracy in recognizing fear expressions was significantly worse in the case group than in the control group (p<0.05), and tended to be worse for angry facial expressions as well. Response times for happy, fearful, disgusted and angry expressions were significantly longer in the case group than in the control group (p<0.05). This is the first study to examine facial emotion recognition ability in euthymic bipolar patients with delinquent behaviors. We have shown that such patients may have social interaction problems, i.e., misrecognizing fearful and, to a lesser extent, angry facial expressions, and may need more time to respond to facial emotions even in remission. Copyright © 2014 Elsevier Inc. All rights reserved.
The Influence of Emotion on Keyboard Typing: An Experimental Study Using Auditory Stimuli.
Lee, Po-Ming; Tsui, Wei-Hsuan; Hsiao, Tzu-Chien
2015-01-01
In recent years, a novel approach for emotion recognition has been reported: keystroke dynamics. The advantages of this approach are that the data are non-intrusive and easy to obtain. However, previous studies have offered only limited investigation of the phenomenon itself. Hence, this study aimed to examine the source of variance in keyboard typing patterns caused by emotions. A controlled experiment was conducted to collect subjects' keystroke data in different emotional states induced by the International Affective Digitized Sounds (IADS). Two-way Valence (3) × Arousal (3) ANOVAs were used to examine the collected dataset. The results indicate that the effect of arousal is significant for keystroke duration (p < .05) and keystroke latency (p < .01), but not for the accuracy rate of keyboard typing. The size of the emotional effect is small compared to the individual variability. Our findings support the conclusion that keystroke duration and latency are influenced by arousal. The finding about effect size suggests that the accuracy of emotion recognition technology could be further improved if personalized models are utilized. Notably, the experiment was conducted using standard instruments and hence is expected to be highly reproducible.
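The two keystroke measures analyzed above, duration (key-down to key-up of the same key) and latency (key-down to next key-down), can be computed directly from timestamped key events. The sketch below shows one way to do so; the event format is an illustrative assumption, not the study's logging scheme.

```python
# Timestamped key events: (time_in_seconds, key, "down" | "up") -- an assumed format.
events = [
    (0.00, "h", "down"), (0.09, "h", "up"),
    (0.17, "i", "down"), (0.26, "i", "up"),
]

def keystroke_features(events):
    """Return (durations, latencies): hold times and down-to-down intervals."""
    pending_down = {}   # key -> time of its unmatched key-down
    durations = []      # keystroke duration: key-down to key-up of the same key
    down_times = []     # all key-down times, for latency computation
    for t, key, kind in events:
        if kind == "down":
            pending_down[key] = t
            down_times.append(t)
        elif kind == "up" and key in pending_down:
            durations.append(t - pending_down.pop(key))
    latencies = [b - a for a, b in zip(down_times, down_times[1:])]
    return durations, latencies

print(keystroke_features(events))   # approximately ([0.09, 0.09], [0.17])
```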
The Relationship between Child Maltreatment and Emotion Recognition
Koizumi, Michiko; Takagishi, Haruto
2014-01-01
Child abuse and neglect affect the development of social cognition in children and inhibit social adjustment. The purpose of this study was to compare the ability to identify the emotional states of others between abused and non-abused children. The participants, 129 children (44 abused and 85 non-abused children), completed a children’s version of the Reading the Mind in the Eyes Test (RMET). Results showed that the mean accuracy rate on the RMET for abused children was significantly lower than the rate of the non-abused children. In addition, the accuracy rates for positive emotion items (e.g., hoping, interested, happy) were significantly lower for the abused children, but negative emotion and neutral items were not different across the groups. This study found a negative relationship between child abuse and the ability to understand others’ emotions, especially positive emotions. PMID:24465891
Moeller, Sara K; Lee, Elizabeth A Ewing; Robinson, Michael D
2011-08-01
Dominance and submission constitute fundamentally different social interaction strategies that may be enacted most effectively to the extent that the emotions of others are relatively ignored (dominance) versus noticed (submission). On the basis of such considerations, we hypothesized a systematic relationship between chronic tendencies toward high versus low levels of interpersonal dominance and emotion decoding accuracy in objective tasks. In two studies (total N = 232), interpersonally dominant individuals exhibited poorer levels of emotion recognition in response to audio and video clips (Study 1) and facial expressions of emotion (Study 2). The results provide a novel perspective on interpersonal dominance, suggest its strategic nature (Study 2), and are discussed in relation to Fiske's (1993) social-cognitive theory of power. 2011 APA, all rights reserved
EEG-based emotion recognition in music listening.
Lin, Yuan-Pin; Wang, Chi-Hong; Jung, Tzyy-Ping; Wu, Tien-Lin; Jeng, Shyh-Kang; Duann, Jeng-Ren; Chen, Jyh-Horng
2010-07-01
Ongoing brain activity can be recorded as an electroencephalogram (EEG) to discover links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subjects' self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. A support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure), obtaining an average classification accuracy of 82.29% +/- 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were derived primarily from electrodes placed near the frontal and parietal lobes, consistent with many findings in the literature. This study might lead to a practical system for noninvasive assessment of emotional states in everyday or clinical applications.
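A minimal sketch of the feature-classifier combination this abstract describes, spectral band-power features from multi-channel EEG fed to an SVM, is given below. The sampling rate, band edges, channel count, and synthetic trials are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 256   # sampling rate in Hz (an assumption)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(trial):
    """trial: (n_channels, n_samples) EEG; returns log band power per channel/band."""
    f, psd = welch(trial, fs=fs, nperseg=fs)   # Welch power spectral density
    feats = []
    for lo, hi in bands.values():
        mask = (f >= lo) & (f < hi)
        feats.extend(psd[:, mask].mean(axis=1))   # mean power in band, per channel
    return np.log(feats)                          # log power is a common choice

rng = np.random.default_rng(1)
trials = rng.normal(size=(120, 32, fs * 2))   # 120 synthetic 2-s, 32-channel trials
X = np.array([band_powers(t) for t in trials])
y = rng.integers(0, 4, size=120)              # joy / anger / sadness / pleasure labels

print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())  # ~chance on random data
```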
Xu, Lei; Ma, Xiaole; Zhao, Weihua; Luo, Lizhu; Yao, Shuxia; Kendrick, Keith M
2015-12-01
There is considerable interest in the potential therapeutic role of the neuropeptide oxytocin in altering attentional bias towards emotional social stimuli in psychiatric disorders. However, it is still unclear whether oxytocin primarily influences attention towards positive or negative valence social stimuli. Here, in a double-blind, placebo-controlled, between-subject experiment with 60 healthy male subjects, we used the highly sensitive dual-target rapid serial visual presentation (RSVP) paradigm to investigate whether intranasal oxytocin (40 IU) treatment alters attentional bias for emotional faces. Results show that oxytocin improved recognition accuracy of neutral and happy expression faces presented in the second target position (T2) during the period of reduced attentional capacity following prior presentation of a first neutral face target (T1), but had no effect on recognition of negative expression faces (angry, fearful, sad). Oxytocin also had no effect on recognition of non-social stimuli (digits) in this task. Recognition accuracy for neutral faces at T2 was negatively associated with autism spectrum quotient (ASQ) scores in the placebo group, and oxytocin's facilitatory effects were restricted to a sub-group of subjects with higher ASQ scores. Our results therefore indicate that oxytocin primarily enhances the allocation of attentional resources towards faces expressing neutral or positive emotion and does not influence attention to negative-emotion faces or non-social stimuli. This effect of oxytocin is strongest in healthy individuals with higher autistic trait scores, thereby providing further support for its potential therapeutic use in autism spectrum disorder. Copyright © 2015 Elsevier Ltd. All rights reserved.
Facial decoding in schizophrenia is underpinned by basic visual processing impairments.
Belge, Jan-Baptist; Maurage, Pierre; Mangelinckx, Camille; Leleux, Dominique; Delatte, Benoît; Constant, Eric
2017-09-01
Schizophrenia is associated with a strong deficit in the decoding of emotional facial expression (EFE). Nevertheless, it is still unclear whether this deficit is specific to emotions or reflects a more general impairment in facial processing. This study was designed to clarify this issue. Thirty patients suffering from schizophrenia and 30 matched healthy controls performed several tasks evaluating the recognition of both changeable (i.e. eye orientation and emotions) and stable (i.e. gender, age) facial characteristics. Accuracy and reaction times were recorded. Patients presented a performance deficit (accuracy and reaction times) in the perception of both changeable and stable aspects of faces, without any specific deficit for emotional decoding. Our results demonstrate a generalized face recognition deficit in schizophrenia, probably caused by a deficit in basic visual processing. The EFE decoding deficit thus does not appear to be specific to emotion processing but is at least partly rooted in a generalized deficit in lower-level visual processing, occurring before the stage of emotion processing and underlying more complex cognitive dysfunctions. These findings should encourage future investigations to explore the neurophysiological background of these generalized perceptual deficits, and stimulate a clinical approach focusing on more basic visual processing. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Sex differences in the ability to recognise non-verbal displays of emotion: a meta-analysis.
Thompson, Ashley E; Voyer, Daniel
2014-01-01
The present study aimed to quantify the magnitude of sex differences in humans' ability to accurately recognise non-verbal emotional displays. Studies of relevance were those that required explicit labelling of discrete emotions presented in the visual and/or auditory modality. A final set of 551 effect sizes from 215 samples was included in a multilevel meta-analysis. The results showed a small overall advantage in favour of females on emotion recognition tasks (d=0.19). However, the magnitude of that sex difference was moderated by several factors, namely specific emotion, emotion type (negative, positive), sex of the actor, sensory modality (visual, audio, audio-visual) and age of the participants. Method of presentation (computer, slides, print, etc.), type of measurement (response time, accuracy) and year of publication did not significantly contribute to variance in effect sizes. These findings are discussed in the context of social and biological explanations of sex differences in emotion recognition.
The effects of emotion on memory for music and vocalisations.
Aubé, William; Peretz, Isabelle; Armony, Jorge L
2013-01-01
Music is a powerful tool for communicating emotions which can elicit memories through associative mechanisms. However, it is currently unknown whether emotion can modulate memory for music without reference to a context or personal event. We conducted three experiments to investigate the effect of basic emotions (fear, happiness, and sadness) on recognition memory for music, using short, novel stimuli explicitly created for research purposes, and compared them with nonlinguistic vocalisations. Results showed better memory accuracy for musical clips expressing fear and, to some extent, happiness. In the case of nonlinguistic vocalisations we confirmed a memory advantage for all emotions tested. A correlation between memory accuracy for music and vocalisations was also found, particularly in the case of fearful expressions. These results confirm that emotional expressions, particularly fearful ones, conveyed by music can influence memory as has been previously shown for other forms of expressions, such as faces and vocalisations.
Houston, Kate A; Clifford, Brian R; Phillips, Louise H; Memon, Amina
2013-02-01
The present set of experiments aimed to investigate the effects of negative emotion on specific aspects of eyewitness recall and recognition performance. The experience of emotion was manipulated between subjects, with participants viewing either a crime scenario (a mugging) or a neutral scenario (a conversation). Eyewitness recall was categorized into descriptions of the perpetrator, critical incident, victim, and environmental details. The completeness and accuracy of eyewitness recall across categories of detail were measured in Experiment 1. A significant main effect of negative emotion was found for the completeness of eyewitness statements, but not for their accuracy. However, this main effect was qualified by a significant interaction between emotion and category of detail recalled. Specifically, emotional participants provided a more complete description of the perpetrator than neutral participants; however, they were less able than their neutral counterparts to describe what the perpetrator did to the victim. In light of these findings, Experiment 2 investigated whether enhanced completeness of perpetrator descriptions during recall translated into an enhanced ability to recognize the perpetrator from a photographic lineup by emotional compared with neutral participants. Results from Experiment 2 suggest that while emotional participants again provide a more complete description of the perpetrator, they are less able than their neutral counterparts to recognize the perpetrator from a photographic lineup. Results are discussed in terms of a retrieval motivation hypothesis of negative emotional experience and the possible consequences for eyewitness testimony. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Tracking Emotional Valence: The Role of the Orbitofrontal Cortex
Goodkind, Madeleine S.; Sollberger, Marc; Gyurak, Anett; Rosen, Howard J.; Rankin, Katherine; Miller, Bruce; Levenson, Robert
2011-01-01
Successful navigation of the social world requires the ability to recognize and track emotions as they unfold and change dynamically. Neuroimaging and neurological studies of emotion recognition have primarily focused on the ability to identify the emotion shown in static photographs of facial expressions, showing correlations with the amygdala as well as temporal and frontal brain regions. In the current study we examined the neural correlates of continuously tracking dynamically-changing emotions. Fifty-nine patients with diverse neurodegenerative diseases used a rating dial to track continuously how positive or how negative the character in a film clip felt. Tracking accuracy was determined by comparing participants’ ratings with the ratings of 10 normal control participants. The relationship between tracking accuracy and regional brain tissue content was examined using voxel-based morphometry. Low tracking accuracy was primarily associated with gray matter loss in the right lateral orbitofrontal cortex (OFC). Our finding that the right OFC is critical to the ability to track dynamically-changing emotions is consistent with previous research showing right OFC involvement in both socioemotional understanding and modifying responding in changing situations. PMID:21425397
Wells, Laura Jean; Gillespie, Steven Mark; Rotshtein, Pia
2016-01-01
The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each with three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, but slowest and least accurate for fearful expressions. More intense expressions were also classified most accurately. Reaction time showed a different pattern, with slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than on the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to the most for fearful, angry, and disgusted expressions and least for surprise. These results extend upon previous findings to show important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down. PMID:27942030
Balconi, M; Cobelli, C
2015-02-26
The present research explored the cortical correlates of emotional memories in response to words and pictures. Subjects' performance (Accuracy Index, AI; response times, RTs; RTs/AI) was considered when repetitive transcranial magnetic stimulation (rTMS) was applied over the left dorsolateral prefrontal cortex (LDLPFC). Specifically, the role of the LDLPFC was tested with a memory task in which old (previously encoded targets) and new (previously not encoded distractors) emotional pictures/words had to be recognized. The valence (positive vs. negative) and arousing power (high vs. low) of the stimuli were also modulated, and subjective evaluation of the emotional stimuli in terms of valence/arousal was explored. We found significant performance improvements (higher AI, reduced RTs, improved general performance) in response to rTMS. This "better recognition effect" was related only to specific emotional features, namely positive high-arousal pictures or words. Moreover, no significant differences were found between stimulus categories. A direct relationship was also observed between subjective evaluation of emotional cues and memory performance when rTMS was applied to the LDLPFC. In line with valence and approach models of emotion, we suggest that a left-lateralized prefrontal system may support better recognition of positive high-arousal words, and that evaluation of an emotional cue is related to prefrontal activation, affecting recognition memory for emotions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Wallace, Gregory L.; Case, Laura K.; Harms, Madeline B.; Silvers, Jennifer A.; Kenworthy, Lauren; Martin, Alex
2011-01-01
Prior studies implicate facial emotion recognition (FER) difficulties among individuals with autism spectrum disorders (ASD); however, many investigations focus on FER accuracy alone and few examine ecological validity through links with everyday functioning. We compared FER accuracy and perceptual sensitivity (from neutral to full expression)…
Ji, E; Weickert, C S; Lenroot, R; Kindler, J; Skilleter, A J; Vercammen, A; White, C; Gur, R E; Weickert, T W
2016-05-03
Estrogen has been implicated in the development and course of schizophrenia with most evidence suggesting a neuroprotective effect. Treatment with raloxifene, a selective estrogen receptor modulator, can reduce symptom severity, improve cognition and normalize brain activity during learning in schizophrenia. People with schizophrenia are especially impaired in the identification of negative facial emotions. The present study was designed to determine the extent to which adjunctive raloxifene treatment would alter abnormal neural activity during angry facial emotion recognition in schizophrenia. Twenty people with schizophrenia (12 men, 8 women) participated in a 13-week, randomized, double-blind, placebo-controlled, crossover trial of adjunctive raloxifene treatment (120 mg per day orally) and performed a facial emotion recognition task during functional magnetic resonance imaging after each treatment phase. Two-sample t-tests in regions of interest selected a priori were performed to assess activation differences between raloxifene and placebo conditions during the recognition of angry faces. Adjunctive raloxifene significantly increased activation in the right hippocampus and left inferior frontal gyrus compared with the placebo condition (family-wise error, P<0.05). There was no significant difference in performance accuracy or reaction time between active and placebo conditions. To the best of our knowledge, this study provides the first evidence suggesting that adjunctive raloxifene treatment changes neural activity in brain regions associated with facial emotion recognition in schizophrenia. These findings support the hypothesis that estrogen plays a modifying role in schizophrenia and shows that adjunctive raloxifene treatment may reverse abnormal neural activity during facial emotion recognition, which is relevant to impaired social functioning in men and women with schizophrenia.
Campo, D.; Quintero, O. L.; Bastidas, M.
2016-04-01
We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (discrete wavelet transform) was performed using the Daubechies wavelet family (Db1/Haar, Db6, Db8, Db10), decomposing the initial audio signal into sets of coefficients from which features were extracted and analyzed statistically in order to differentiate emotional states. Artificial neural networks (ANNs) proved to allow an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract emotional content in audio signals, yielding a high classification accuracy for emotional states without the need for classical time-frequency features. Accordingly, this paper seeks to characterize mathematically six basic human emotions: boredom, disgust, happiness, anxiety, anger and sadness, plus neutrality, for a total of seven states to identify.
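The wavelet feature extraction described above can be sketched with PyWavelets as below. The summary statistics per coefficient set are an assumption for illustration, since the abstract does not enumerate the exact feature set.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_features(signal, wavelet="db6", level=5):
    """Decompose an audio signal and summarize each coefficient set."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA5, cD5, ..., cD1]
    feats = []
    for c in coeffs:
        feats += [np.mean(c), np.std(c), np.sum(c ** 2)]  # mean, spread, energy
    return np.array(feats)

rng = np.random.default_rng(2)
audio = rng.normal(size=16000)   # stand-in for a 1-s utterance sampled at 16 kHz
for w in ("haar", "db6", "db8", "db10"):   # the family members named in the abstract
    print(w, wavelet_features(audio, wavelet=w).shape)
```

Feature vectors like these would then be passed to an ANN classifier, as the abstract describes.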
Acute Effects of Alcohol on Encoding and Consolidation of Memory for Emotional Stimuli
Weafer, Jessica; Gallo, David A.; De Wit, Harriet
2016-01-01
Objective: Acute doses of alcohol impair memory when administered before encoding of emotionally neutral stimuli but enhance memory when administered immediately after encoding, potentially by affecting memory consolidation. Here, we examined whether alcohol produces similar biphasic effects on memory for positive or negative emotional stimuli. Method: The current study examined memory for emotional stimuli after alcohol (0.8 g/kg) was administered either before stimulus viewing (encoding group; n = 20) or immediately following stimulus viewing (consolidation group; n = 20). A third group received placebo both before and after stimulus viewing (control group; n = 19). Participants viewed the stimuli on one day, and their retrieval was assessed exactly 48 hours later, when they performed a surprise cued recollection and recognition test of the stimuli in a drug-free state. Results: As in previous studies, alcohol administered before encoding impaired memory accuracy, whereas alcohol administered after encoding enhanced memory accuracy. Critically, alcohol effects on cued recollection depended on the valence of the emotional stimuli: Its memory-impairing effects during encoding were greatest for emotional stimuli, whereas its memory-enhancing effects during consolidation were greatest for emotionally neutral stimuli. Effects of alcohol on recognition were not related to stimulus valence. Conclusions: This study extends previous findings with memory for neutral stimuli, showing that alcohol differentially affects the encoding and consolidation of memory for emotional stimuli. These effects of alcohol on memory for emotionally salient material may contribute to the development of alcohol-related problems, perhaps by dampening memory for adverse consequences of alcohol consumption. PMID:26751358
Kosaka, H; Omori, M; Murata, T; Iidaka, T; Yamada, H; Okada, T; Takahashi, T; Sadato, N; Itoh, H; Yonekura, Y; Wada, Y
2002-09-01
Human lesion and neuroimaging studies suggest that the amygdala is involved in facial emotion recognition. Although impairments in recognition of facial and/or emotional expression have been reported in schizophrenia, few neuroimaging studies have examined differential brain activation during facial recognition between patients with schizophrenia and normal controls. To investigate amygdala responses during facial recognition in schizophrenia, we conducted a functional magnetic resonance imaging (fMRI) study with 12 right-handed medicated patients with schizophrenia and 12 age- and sex-matched healthy controls. The experimental task was an emotional intensity judgment task. During the task period, subjects were asked to view happy (or angry/disgusted/sad) and neutral faces presented simultaneously every 3 s and to judge which face was more emotional (positive or negative face discrimination). Imaging data were investigated on a voxel-by-voxel basis for single-group analysis and for between-group analysis according to the random effect model using Statistical Parametric Mapping (SPM). No significant difference in task accuracy was found between the schizophrenic and control groups. Positive face discrimination activated the bilateral amygdalae of both controls and patients, with more prominent activation of the right amygdala in the schizophrenic group. Negative face discrimination activated the bilateral amygdalae in the schizophrenic group but only the right amygdala in the control group, although no significant group difference was found. The exaggerated amygdala activation during emotional intensity judgment found in the schizophrenic patients may reflect impaired gating of sensory input containing emotion. Copyright 2002 Elsevier Science B.V.
Affect recognition across manic and euthymic phases of bipolar disorder in Han-Chinese patients.
Pan, Yi-Ju; Tseng, Huai-Hsuan; Liu, Shi-Kai
2013-11-01
Patients with bipolar disorder (BD) have affect recognition deficits. Whether affect recognition deficits constitute a state or trait marker of BD has great etiopathological significance. The current study aims to explore the interrelationships between affect recognition and basic neurocognitive functions in patients with BD across different mood states, using the Diagnostic Analysis of Non-Verbal Accuracy-2, Taiwanese version (DANVA-2-TW) as the index measure of affect recognition. To our knowledge, this is the first study examining affect recognition deficits of BD across mood states in the Han Chinese population. Twenty-nine manic patients, 16 remitted patients with BD, and 40 control subjects were included in the study. Distinct association patterns between affect recognition and neurocognitive functions were demonstrated for patients with BD and control subjects, implicating alterations in emotion-associated neurocognitive processing. Compared to control subjects, manic patients but not remitted subjects performed significantly worse in the recognition of negative emotions as a whole, and specifically anger, after adjusting for differences in general intellectual ability and basic neurocognitive functions. Affect recognition deficit may be a relatively independent impairment in BD rather than a consequence of deficits in other basic neurocognition. The impairments of manic patients in the recognition of negative emotions, specifically anger, may further our understanding of the core clinical psychopathology of BD and have implications for treating bipolar patients across distinct mood phases. © 2013 Elsevier B.V. All rights reserved.
Laukka, Petri; Neiberg, Daniel; Elfenbein, Hillary Anger
2014-06-01
The possibility of cultural differences in the fundamental acoustic patterns used to express emotion through the voice is an unanswered question central to the larger debate about the universality versus cultural specificity of emotion. This study used emotionally inflected standard-content speech segments expressing 11 emotions produced by 100 professional actors from 5 English-speaking cultures. Machine learning simulations were employed to classify expressions based on their acoustic features, using conditions where training and testing were conducted on stimuli coming from either the same or different cultures. A wide range of emotions were classified with above-chance accuracy in cross-cultural conditions, suggesting vocal expressions share important characteristics across cultures. However, classification showed an in-group advantage with higher accuracy in within- versus cross-cultural conditions. This finding demonstrates cultural differences in expressive vocal style, and supports the dialect theory of emotions according to which greater recognition of expressions from in-group members results from greater familiarity with culturally specific expressive styles.
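The study's within- versus cross-culture evaluation can be sketched as a train-on-one-culture, test-on-each-culture loop. Everything in the snippet below (features, labels, culture codes, classifier choice) is a synthetic placeholder, not the study's data or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
cultures = ["AUS", "IND", "KEN", "SGP", "USA"]   # hypothetical group labels
group = rng.choice(cultures, size=500)           # each sample's culture
X = rng.normal(size=(500, 20))                   # stand-in acoustic features
y = rng.integers(0, 11, size=500)                # 11 emotion categories

for train_c in cultures:
    train = group == train_c
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    for test_c in cultures:
        acc = clf.score(X[group == test_c], y[group == test_c])
        kind = "within" if train_c == test_c else "cross"
        # NB: the within-culture score here is resubstitution accuracy; a real
        # analysis would cross-validate within each culture before comparing.
        print(f"{kind:6s} train={train_c} test={test_c} acc={acc:.2f}")
```

An in-group advantage, as reported above, would appear as within-culture accuracy systematically exceeding cross-culture accuracy.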
Memory for faces and voices varies as a function of sex and expressed emotion.
S Cortes, Diana; Laukka, Petri; Lindahl, Christina; Fischer, Håkan
2017-01-01
We investigated how memory for faces and voices (presented separately and in combination) varies as a function of sex and emotional expression (anger, disgust, fear, happiness, sadness, and neutral). At encoding, participants judged the expressed emotion of items in forced-choice tasks, followed by incidental Remember/Know recognition tasks. Results from 600 participants showed that accuracy (hits minus false alarms) was consistently higher for neutral compared to emotional items, whereas accuracy for specific emotions varied across the presentation modalities (i.e., faces, voices, and face-voice combinations). For the subjective sense of recollection ("remember" hits), neutral items received the highest hit rates only for faces, whereas for voices and face-voice combinations anger and fear expressions instead received the highest recollection rates. We also observed better accuracy for items by female expressers, and own-sex bias where female participants displayed memory advantage for female faces and face-voice combinations. Results further suggest that own-sex bias can be explained by recollection, rather than familiarity, rates. Overall, results show that memory for faces and voices may be influenced by the expressions that they carry, as well as by the sex of both items and participants. Emotion expressions may also enhance the subjective sense of recollection without enhancing memory accuracy. PMID:28570691
Platt, Bradley; Kamboj, Sunjeev; Morgan, Celia J A; Curran, H Valerie
2010-11-01
While heavy cannabis-users seem to show various cognitive impairments, it remains unclear whether they also experience significant deficits in affective functioning. Evidence of such deficits may contribute to our understanding of the interpersonal difficulties in cannabis-users, and the link between cannabis-use and psychological disorders (Moore et al., 2007). Emotion recognition performance of heavy cannabis-users and non-using controls was compared. A measure of emotion recognition was used in which participants identified facial expressions as they changed from neutral (open-mouth) to gradually more intense expressions of sadness, neutral, anger or happiness (open or closed mouth). Reaction times and accuracy were recorded as the facial expressions changed. Participants also completed measures of 'theory of mind,' depression and impulsivity. Cannabis-users were significantly slower than controls at identifying all three emotional expressions. There was no difference between groups in identifying facial expressions changing from open-mouth neutral expressions to closed-mouth neutral expressions suggesting that differences in emotion recognition were not due to a general slowing of reaction times. Cannabis-users were also significantly more liberal in their response criterion for recognising sadness. Heavy cannabis-use may be associated with affect recognition deficits. In particular, a greater intensity of emotion expression was required before identification of positive and negative emotions. This was found using stimuli which simulated dynamic changes in emotion expression, and in turn, suggests that cannabis-users may experience generalised problems in decoding basic emotions during social interactions. The implications of these findings are discussed for vulnerability to psychological and interpersonal difficulties in cannabis-users. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Heterogeneity of long-history migration predicts emotion recognition accuracy.
Wood, Adrienne; Rychlowska, Magdalena; Niedenthal, Paula M
2016-06-01
Recent work (Rychlowska et al., 2015) demonstrated the power of a relatively new cultural dimension, historical heterogeneity, in predicting cultural differences in the endorsement of emotion expression norms. Historical heterogeneity describes the number of source countries that have contributed to a country's present-day population over the last 500 years. People in cultures originating from a large number of source countries may have historically benefited from greater and clearer emotional expressivity, because they lacked a common language and well-established social norms. We therefore hypothesized that in addition to endorsing more expressive display rules, individuals from heterogeneous cultures will also produce facial expressions that are easier to recognize by people from other cultures. By reanalyzing cross-cultural emotion recognition data from 92 papers and 82 cultures, we show that emotion expressions of people from heterogeneous cultures are more easily recognized by observers from other cultures than are the expressions produced in homogeneous cultures. Heterogeneity influences expression recognition rates alongside the individualism-collectivism of the perceivers' culture, as more individualistic cultures were more accurate in emotion judgments than collectivistic cultures. This work reveals the present-day behavioral consequences of long-term historical migration patterns and demonstrates the predictive power of historical heterogeneity. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Laukka, Petri; Elfenbein, Hillary Anger; Thingujam, Nutankumar S; Rockstuhl, Thomas; Iraki, Frederick K; Chui, Wanda; Althoff, Jean
2016-11-01
This study extends previous work on emotion communication across cultures with a large-scale investigation of the physical expression cues in vocal tone. In doing so, it provides the first direct test of a key proposition of dialect theory, namely that greater accuracy of detecting emotions from one's own cultural group-known as in-group advantage-results from a match between culturally specific schemas in emotional expression style and culturally specific schemas in emotion recognition. Study 1 used stimuli from 100 professional actors from five English-speaking nations vocally conveying 11 emotional states (anger, contempt, fear, happiness, interest, lust, neutral, pride, relief, sadness, and shame) using standard-content sentences. Detailed acoustic analyses showed many similarities across groups, and yet also systematic group differences. This provides evidence for cultural accents in expressive style at the level of acoustic cues. In Study 2, listeners evaluated these expressions in a 5 × 5 design balanced across groups. Cross-cultural accuracy was greater than expected by chance. However, there was also in-group advantage, which varied across emotions. A lens model analysis of fundamental acoustic properties examined patterns in emotional expression and perception within and across groups. Acoustic cues were used relatively similarly across groups both to produce and judge emotions, and yet there were also subtle cultural differences. Speakers appear to have a culturally nuanced schema for enacting vocal tones via acoustic cues, and perceivers have a culturally nuanced schema in judging them. Consistent with dialect theory's prediction, in-group judgments showed a greater match between these schemas used for emotional expression and perception. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Liu, Yi-Hung; Wu, Chien-Te; Cheng, Wei-Teng; Hsiao, Yu-Tsung; Chen, Po-Ming; Teng, Jyh-Tong
2014-01-01
Electroencephalogram-based emotion recognition (EEG-ER) has received increasing attention in the fields of health care, affective computing, and brain-computer interface (BCI). However, satisfactory ER performance within a bi-dimensional and non-discrete emotional space using single-trial EEG data remains a challenging task. To address this issue, we propose a three-layer scheme for single-trial EEG-ER. In the first layer, a set of spectral powers of different EEG frequency bands are extracted from multi-channel single-trial EEG signals. In the second layer, the kernel Fisher's discriminant analysis method is applied to further extract features with better discrimination ability from the EEG spectral powers. The feature vector produced by layer 2 is called a kernel Fisher's emotion pattern (KFEP), and is sent into layer 3 for further classification where the proposed imbalanced quasiconformal kernel support vector machine (IQK-SVM) serves as the emotion classifier. The outputs of the three layer EEG-ER system include labels of emotional valence and arousal. Furthermore, to collect effective training and testing datasets for the current EEG-ER system, we also use an emotion-induction paradigm in which a set of pictures selected from the International Affective Picture System (IAPS) are employed as emotion induction stimuli. The performance of the proposed three-layer solution is compared with that of other EEG spectral power-based features and emotion classifiers. Results on 10 healthy participants indicate that the proposed KFEP feature performs better than other spectral power features, and IQK-SVM outperforms traditional SVM in terms of the EEG-ER accuracy. Our findings also show that the proposed EEG-ER scheme achieves the highest classification accuracies of valence (82.68%) and arousal (84.79%) among all testing methods. PMID:25061837
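The three-layer pipeline can be approximated, very loosely, with off-the-shelf components. In the sketch below, KernelPCA stands in for kernel Fisher's discriminant analysis and a class-weighted RBF SVM stands in for the paper's imbalanced quasiconformal kernel SVM, so this is a structural illustration only, on synthetic placeholder data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 64))        # layer 1: per-trial EEG band powers (assumed given)
y_valence = rng.integers(0, 2, 200)   # high vs. low valence labels

model = make_pipeline(
    KernelPCA(n_components=10, kernel="rbf"),    # layer 2: kernel feature extraction
    SVC(kernel="rbf", class_weight="balanced"),  # layer 3: imbalance-aware classifier
)
print(cross_val_score(model, X, y_valence, cv=5).mean())  # ~chance on random data
```

A second, identical pipeline would be trained for the arousal labels, giving the valence/arousal outputs the abstract describes.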
Misinterpretation of facial expression: a cross-cultural study.
Shioiri, T; Someya, T; Helmeste, D; Tang, S W
1999-02-01
Accurately recognizing facial emotional expressions is important in psychiatrist-patient interactions. This might be difficult when the physician and patients are from different cultures. More than two decades of research on facial expressions have documented the universality of the emotions of anger, contempt, disgust, fear, happiness, sadness, and surprise. In contrast, some research data support the concept that there are significant cultural differences in the judgment of emotion. In this pilot study, the recognition of emotional facial expressions in 123 Japanese subjects was evaluated using the Japanese and Caucasian Facial Expression of Emotion (JACFEE) photos. The results indicated that, compared to previous studies using American subjects, Japanese subjects experienced difficulties in recognizing some emotional facial expressions and misunderstood others as depicted by the posers. Interestingly, the sex and cultural background of the poser did not appear to influence the accuracy of recognition. The data suggest that in this young Japanese sample, judgment of certain emotional facial expressions was significantly different from that of Americans. Further exploration in this area is warranted due to its importance in cross-cultural clinician-patient interactions.
Indersmitten, Tim; Gur, Ruben C
2003-05-01
Since the discovery of facial asymmetries in emotional expressions of humans and other primates, hypotheses have related the greater left-hemiface intensity to right-hemispheric dominance in emotion processing. However, the difficulty of creating true frontal views of facial expressions in two-dimensional photographs has confounded efforts to better understand the phenomenon. We have recently described a method for obtaining three-dimensional photographs of posed and evoked emotional expressions and used these stimuli to investigate both intensity of expression and accuracy of recognizing emotion in chimeric faces constructed from only left- or right-side composites. The participant population included 38 (19 male, 19 female) African-American, Caucasian, and Asian adults. They were presented with chimeric composites generated from faces of eight actors and eight actresses showing four emotions: happiness, sadness, anger, and fear, each in posed and evoked conditions. We replicated the finding that emotions are expressed more intensely in the left hemiface for all emotions and conditions, with the exception of evoked anger, which was expressed more intensely in the right hemiface. In contrast, the results indicated that emotional expressions are recognized more efficiently in the right hemiface, indicating that the right hemiface expresses emotions more accurately. The double dissociation between the laterality of expression intensity and that of recognition efficiency supports the notion that the two kinds of processes may have distinct neural substrates. Evoked anger is uniquely expressed more intensely and accurately on the side of the face that projects to the viewer's right hemisphere, dominant in emotion recognition.
Kwon, Yea-Hoon; Shin, Sae-Byuk; Kim, Shin-Dug
2018-04-30
The purpose of this study is to improve human emotion classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method to classify emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through a CNN; therefore, we propose a suitable CNN model for feature extraction by tuning the hyperparameters of the convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform, considering time and frequency simultaneously. We use the Database for Emotion Analysis using Physiological Signals (DEAP) open dataset to verify the proposed process, achieving 73.4% accuracy and showing significant performance improvement over the current best practice models.
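As a rough illustration of the multimodal design described above, the sketch below (PyTorch, synthetic data) pairs a small CNN over a wavelet-style time-frequency input with a scalar zero-crossing-rate feature from GSR. The layer sizes, input shapes, and fusion-by-concatenation are assumptions, not the authors' architecture.

```python
# Illustrative sketch: a small CNN extracts EEG features from a
# time-frequency (wavelet) representation, the GSR channel is reduced to its
# zero-crossing rate, and both feed one classification head.
import numpy as np
import torch
import torch.nn as nn

def zero_crossing_rate(gsr):
    """Fraction of sign changes in a detrended GSR signal."""
    g = gsr - gsr.mean()
    return np.mean(np.signbit(g[:-1]) != np.signbit(g[1:]))

class EEGEmotionNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(           # input: (batch, 1, 32 scales, 128 times)
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(32 * 4 * 4 + 1, n_classes)  # +1 for the GSR feature

    def forward(self, eeg_tf, gsr_zcr):
        h = self.conv(eeg_tf).flatten(1)
        return self.head(torch.cat([h, gsr_zcr], dim=1))

# Hypothetical batch: wavelet scalograms plus one scalar GSR feature each.
eeg_tf = torch.randn(8, 1, 32, 128)
gsr_zcr = torch.tensor([[zero_crossing_rate(np.random.randn(512))] for _ in range(8)],
                       dtype=torch.float32)
logits = EEGEmotionNet()(eeg_tf, gsr_zcr)
```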
Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha
2015-09-01
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, and so on, in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage and bypassing those frames during emotion classification would save computational power. In this paper, we propose a light-weight neutral versus emotion classification engine, which acts as a pre-processor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at key emotion (KE) points using a statistical texture model, constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the statistical texture model. Robustness to dynamic shift of KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
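The gating idea, detecting neutral frames cheaply and skipping the expensive emotion classifier, can be illustrated with a much simpler stand-in for the paper's statistical texture model: a per-user template of patch statistics around key points and a z-distance threshold. Everything below (patch dimensionality, threshold, template form) is hypothetical.

```python
# Sketch of the neutral-gating idea (not the paper's texture model): build a
# per-user neutral template from patch statistics around key facial points,
# then bypass the emotion classifier when the current frame is close to it.
import numpy as np

class NeutralGate:
    def __init__(self, threshold=2.0):
        self.threshold = threshold
        self.mean = self.std = None

    def fit(self, neutral_patches):
        """neutral_patches: (n_frames, n_keypoints, patch_dim) extracted from
        a reference set of neutral frames for this user."""
        self.mean = neutral_patches.mean(axis=0)
        self.std = neutral_patches.std(axis=0) + 1e-6

    def is_neutral(self, patches):
        """Mean z-distance of the frame's key-point patches to the template."""
        z = np.abs((patches - self.mean) / self.std)
        return z.mean() < self.threshold

# Usage: gate frames before an emotion classifier.
gate = NeutralGate()
gate.fit(np.random.rand(30, 10, 64))   # hypothetical neutral reference frames
frame = np.random.rand(10, 64)
if not gate.is_neutral(frame):
    pass  # run the full emotion classifier only on non-neutral frames
```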
Unsupervised learning of facial emotion decoding skills.
Huelle, Jan O; Sack, Benjamin; Broer, Katja; Komlewa, Irina; Anders, Silke
2014-01-01
Research on the mechanisms underlying human facial emotion recognition has long focussed on genetically determined neural algorithms and often neglected the question of how these algorithms might be tuned by social learning. Here we show that facial emotion decoding skills can be significantly and sustainably improved by practice without an external teaching signal. Participants saw video clips of dynamic facial expressions of five different women and were asked to decide which of four possible emotions (anger, disgust, fear, and sadness) was shown in each clip. Although no external information about the correctness of the participant's response or the sender's true affective state was provided, participants showed a significant increase of facial emotion recognition accuracy both within and across two training sessions two days to several weeks apart. We discuss several similarities and differences between the unsupervised improvement of facial decoding skills observed in the current study, unsupervised perceptual learning of simple stimuli described in previous studies and practice effects often observed in cognitive tasks.
Unsupervised learning of facial emotion decoding skills
Huelle, Jan O.; Sack, Benjamin; Broer, Katja; Komlewa, Irina; Anders, Silke
2013-01-01
Research on the mechanisms underlying human facial emotion recognition has long focussed on genetically determined neural algorithms and often neglected the question of how these algorithms might be tuned by social learning. Here we show that facial emotion decoding skills can be significantly and sustainably improved by practice without an external teaching signal. Participants saw video clips of dynamic facial expressions of five different women and were asked to decide which of four possible emotions (anger, disgust, fear, and sadness) was shown in each clip. Although no external information about the correctness of the participant’s response or the sender’s true affective state was provided, participants showed a significant increase of facial emotion recognition accuracy both within and across two training sessions two days to several weeks apart. We discuss several similarities and differences between the unsupervised improvement of facial decoding skills observed in the current study, unsupervised perceptual learning of simple visual stimuli described in previous studies and practice effects often observed in cognitive tasks. PMID:24578686
Facial Recognition of Happiness Is Impaired in Musicians with High Music Performance Anxiety.
Sabino, Alini Daniéli Viana; Camargo, Cristielli M; Chagas, Marcos Hortes N; Osório, Flávia L
2018-01-01
Music performance anxiety (MPA) can be defined as a lasting and intense apprehension connected with musical performance in public. Studies suggest that MPA can be regarded as a subtype of social anxiety. Since individuals with social anxiety have deficits in the recognition of facial emotion, we hypothesized that musicians with high levels of MPA would share similar impairments. The aim of this study was to compare parameters of facial emotion recognition (FER) between musicians with high and low MPA. 150 amateur and professional musicians with different musical backgrounds were assessed with respect to their level of MPA and completed a dynamic FER task. The outcomes investigated were accuracy, response time, emotional intensity, and response bias. Musicians with high MPA were less accurate in the recognition of happiness (p = 0.04; d = 0.34), had an increased response bias toward fear (p = 0.03), and an increased response time to facial emotions as a whole (p = 0.02; d = 0.39). Musicians with high MPA displayed FER deficits that were independent of general anxiety levels and possibly of general cognitive capacity. These deficits may favor the maintenance and exacerbation of experiences of anxiety during public performance, since cues of approval, satisfaction, and encouragement are not adequately recognized.
Autonomic imbalance is associated with reduced facial recognition in somatoform disorders.
Pollatos, Olga; Herbert, Beate M; Wankner, Sarah; Dietel, Anja; Wachsmuth, Cornelia; Henningsen, Peter; Sack, Martin
2011-10-01
Somatoform disorders are characterized by the presence of multiple somatic symptoms. While the accuracy of perceiving bodily signals (interoceptive awareness) has only sparsely been investigated in somatoform disorders, recent research has associated autonomic imbalance with cognitive and emotional difficulties in stress-related diseases. This study aimed to investigate how sympathovagal reactivity interacts with performance in recognizing emotions in faces (facial recognition task). Using a facial recognition and appraisal task, skin conductance levels (SCLs), heart rate (HR), and heart rate variability (HRV) were assessed in 26 somatoform patients and compared to healthy controls. Interoceptive awareness was assessed by a heartbeat detection task. We found evidence for a sympathovagal imbalance in somatoform disorders characterized by low parasympathetic reactivity during emotional tasks and increased sympathetic activation during baseline. Somatoform patients exhibited a reduced recognition performance for neutral and sad emotional expressions only. Possible confounding variables such as alexithymia, anxiety or depression were taken into account. Interoceptive awareness was reduced in somatoform patients. Our data demonstrate an imbalance in sympathovagal activation in somatoform disorders associated with decreased parasympathetic activation. This might account for difficulties in the processing of sad and neutral facial expressions in somatoform patients, which might be a pathogenic mechanism for increased everyday vulnerability. Copyright © 2011 Elsevier Inc. All rights reserved.
Emotion recognition from EEG using higher order crossings.
Petrantonakis, Panagiotis C; Hadjileontiadis, Leontios J
2010-03-01
Electroencephalogram (EEG)-based emotion recognition is a relatively new field in the affective computing area with challenging issues regarding the induction of the emotional states and the extraction of the features in order to achieve optimum classification performance. In this paper, a novel emotion evocation and EEG-based feature extraction technique is presented. In particular, the mirror neuron system concept was adapted to efficiently foster emotion induction by the process of imitation. In addition, higher order crossings (HOC) analysis was employed for the feature extraction scheme, and a robust classification method, namely the HOC-emotion classifier (HOC-EC), was implemented, testing four different classifiers [quadratic discriminant analysis (QDA), k-nearest neighbor, Mahalanobis distance, and support vector machines (SVMs)] in order to accomplish efficient emotion recognition. Through a series of facial expression image projections, EEG data were collected from 16 healthy subjects using only 3 EEG channels, namely Fp1, Fp2, and a bipolar channel of the F3 and F4 positions according to the 10-20 system. Two scenarios were examined, using EEG data from a single channel and from combined channels, respectively. Compared with other feature extraction methods, HOC-EC appears to outperform them, achieving 62.3% (using QDA) and 83.33% (using SVM) classification accuracy for the single-channel and combined-channel cases, respectively, differentiating among the six basic emotions, i.e., happiness, surprise, anger, fear, disgust, and sadness. As the emotion class-set is reduced in size, HOC-EC converges toward the maximum classification rate (100% for five or fewer emotions), justifying the efficiency of the proposed approach. This could facilitate the integration of HOC-EC into human-machine interfaces, such as pervasive healthcare systems, enhancing their affective character and providing information about the user's emotional status (e.g., identifying the user's emotion experiences, recurring affective states, and time-dependent emotional trends).
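Higher order crossings themselves are straightforward to compute: count the zero-crossings of the zero-mean signal after each successive application of the backward difference operator. A minimal sketch follows; the HOC order and segment length are chosen arbitrarily, and the downstream classifier stage of HOC-EC is not reproduced.

```python
# Minimal higher order crossings (HOC) feature extractor: the k-th feature is
# the zero-crossing count of the zero-mean signal after k difference passes.
import numpy as np

def hoc_features(x, order=10):
    x = np.asarray(x, dtype=float) - np.mean(x)
    feats = []
    for _ in range(order):
        # count sign changes between consecutive samples
        feats.append(int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))))
        x = np.diff(x)  # apply the backward difference operator once more
    return np.array(feats)

# Example: HOC vector for one 5-second, 128 Hz EEG channel (synthetic here).
segment = np.random.randn(640)
print(hoc_features(segment))
```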
Selecting fillers on emotional appearance improves lineup identification accuracy.
Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F
2014-12-01
Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Facial emotion labeling in unaffected offspring of adults with bipolar I disorder.
Sharma, Aditya Narain; Barron, Evelyn; Le Couteur, James; Close, Andrew; Rushton, Steven; Grunze, Heinz; Kelly, Thomas; Nicol Ferrier, Ian; Le Couteur, Ann Simone
2017-01-15
Young people 'at risk' of developing bipolar disorder have been shown to have deficits in facial emotion labeling across emotions, with some studies reporting deficits for one or more particular emotions. However, these have included a heterogeneous group of young people (siblings of adolescents and offspring of adults with bipolar disorder) who themselves have diagnosed psychopathology (mood disorders and neurodevelopmental disorders including ADHD). 24 offspring of adults with bipolar I disorder and 34 offspring of healthy controls were administered the Diagnostic Analysis of Non Verbal Accuracy 2 (DANVA 2) to investigate the ability of participants to correctly label 4 emotions (happy, sad, fear and anger) using both child and adult faces as stimuli at low and high intensity. Mixed effects modelling revealed that the offspring of adults with bipolar I disorder made more errors than the offspring of healthy controls, both in the overall recognition of facial emotions and in the specific recognition of fear. Furthermore, more errors were made by offspring who were male or younger in age, and in the recognition of emotions using 'child' stimuli. The sample size, lack of blinding of the study team, and the absence of any stimuli that assess subjects' response to a neutral emotional stimulus are limitations of the study. Offspring (with no history of current or past psychopathology or psychotropic medication) of adults with bipolar I disorder displayed facial emotion labeling deficits (particularly for fear), suggesting that facial emotion labeling may be an endophenotype for bipolar disorder. Copyright © 2016 Elsevier B.V. All rights reserved.
MacDonald, P M; Kirkpatrick, S W; Sullivan, L A
1996-11-01
Schematic drawings of facial expressions were evaluated as a possible assessment tool for research on emotion recognition and interpretation involving young children. A subset of Ekman and Friesen's (1976) Pictures of Facial Affect was used as the standard for comparison. Preschool children (N = 138) were shown drawings and photographs in two administration conditions for six emotions (anger, disgust, fear, happiness, sadness, and surprise). The overall correlation between accuracy for the photographs and drawings was .677. A significant difference was found for the stimulus condition (photographs vs. drawings) but not for the administration condition (label-based vs. context-based). Children were significantly more accurate in interpreting drawings than photographs and tended to be more accurate in identifying facial expressions in the label-based administration condition for both photographs and drawings than in the context-based administration condition.
Stress and emotional valence effects on children's versus adolescents' true and false memory.
Quas, Jodi A; Rush, Elizabeth B; Yim, Ilona S; Edelstein, Robin S; Otgaar, Henry; Smeets, Tom
2016-01-01
Despite considerable interest in understanding how stress influences memory accuracy and errors, particularly in children, methodological limitations have made it difficult to examine the effects of stress independent of the effects of the emotional valence of to-be-remembered information in developmental populations. In this study, we manipulated stress levels in 7-8- and 12-14-year-olds and then exposed them to negative, neutral, and positive word lists. Shortly afterward, we tested their recognition memory for the words and false memory for non-presented but related words. Adolescents in the high-stress condition were more accurate than those in the low-stress condition, while children's accuracy did not differ across stress conditions. Also, among adolescents, accuracy and errors were higher for the negative than positive words, while in children, word valence was unrelated to accuracy. Finally, increases in children's and adolescents' cortisol responses, especially in the high-stress condition, were related to greater accuracy but not false memories and only for positive emotional words. Findings suggest that stress at encoding, as well as the emotional content of to-be-remembered information, may influence memory in different ways across development, highlighting the need for greater complexity in existing models of true and false memory formation.
Image jitter enhances visual performance when spatial resolution is impaired.
Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko
2012-09-06
Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients, who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with age-related macular degeneration (AMD) under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with a duration of 100 or 166 ms and an amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced a similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.
How Mood and Task Complexity Affect Children's Recognition of Others’ Emotions
Cummings, Andrew J.; Rennels, Jennifer L.
2013-01-01
Previous studies examined how mood affects children's accuracy in matching emotional expressions and labels (label-based tasks). This study was the first to assess how induced mood (positive, neutral, or negative) influenced 5- to 8-year-olds’ accuracy and reaction time using both context-based tasks, which required inferring a character's emotion from a vignette, and label-based tasks. Both tasks required choosing one of four facial expressions to respond. Children responded more accurately to label-based questions relative to context-based questions at 5 to 7 years of age, but showed no differences at 8 years of age, and when the emotional expression being identified was happiness, sadness, or surprise, but not disgust. For the context-based questions, children were more accurate at inferring sad and disgusted emotions compared to happy and surprised emotions. Induced positive mood facilitated 5-year-olds’ processing (decreased reaction time) in both tasks compared to induced negative and neutral moods. Results demonstrate how task type and children's mood influence children's emotion processing at different ages. PMID:24489442
Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.
Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko
2017-07-01
Emotions modulate ECG signals and might therefore affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subject has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using ECG signals from the Mahnob-HCI database for affect recognition, the features were statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5% with high consistency, using a kNN classifier in 10-fold cross-validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation, including testing with other classifiers and with variations in the ECG signals, e.g. normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.
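To make the evaluation protocol concrete, the sketch below substitutes a much simpler feature for the BEMD-based one, per-window dominant FFT frequency statistics, and scores a kNN identifier with 10-fold cross-validation on synthetic data. Subject counts, window sizes, and the feature itself are assumptions, not the paper's method.

```python
# Simplified sketch of the evaluation protocol: per-window dominant-frequency
# statistics as features (a stand-in for the paper's BEMD-based features) and
# a kNN classifier scored with 10-fold cross-validation to identify subjects.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def dominant_freq_stats(ecg, fs=256, win=512):
    """Mean/std/min/max of the dominant FFT frequency across windows."""
    doms = []
    for start in range(0, len(ecg) - win + 1, win):
        seg = ecg[start:start + win] - np.mean(ecg[start:start + win])
        spec = np.abs(np.fft.rfft(seg))
        doms.append(np.fft.rfftfreq(win, 1 / fs)[np.argmax(spec)])
    doms = np.array(doms)
    return np.array([doms.mean(), doms.std(), doms.min(), doms.max()])

# Hypothetical recordings: 26 subjects x 10 recordings each (synthetic here).
rng = np.random.default_rng(1)
X = np.array([dominant_freq_stats(rng.standard_normal(5120)) for _ in range(260)])
y = np.repeat(np.arange(26), 10)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=10)
```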
Kliemann, Dorit; Rosenblau, Gabriela; Bölte, Sven; Heekeren, Hauke R.; Dziobek, Isabel
2013-01-01
Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social cognitive interventions. PMID:23805122
Emotion categories and dimensions in the facial communication of affect: An integrated approach.
Mehu, Marc; Scherer, Klaus R
2015-12-01
We investigated the role of facial behavior in emotional communication, using both categorical and dimensional approaches. We used a corpus of enacted emotional expressions (GEMEP) in which professional actors are instructed, with the help of scenarios, to communicate a variety of emotional experiences. The results of Study 1 replicated earlier findings showing that only a minority of facial action units are associated with specific emotional categories. Likewise, facial behavior did not show a specific association with particular emotional dimensions. Study 2 showed that facial behavior plays a significant role both in the detection of emotions and in the judgment of their dimensional aspects, such as valence, arousal, dominance, and unpredictability. In addition, a mediation model revealed that the association between facial behavior and recognition of the signaler's emotional intentions is mediated by perceived emotional dimensions. We conclude that, from a production perspective, facial action units convey neither specific emotions nor specific emotional dimensions, but are associated with several emotions and several dimensions. From the perceiver's perspective, facial behavior facilitated both dimensional and categorical judgments, and the former mediated the effect of facial behavior on recognition accuracy. The classification of emotional expressions into discrete categories may, therefore, rely on the perception of more general dimensions such as valence and arousal and, presumably, the underlying appraisals that are inferred from facial movements. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Transgenerational effects of maternal depression on affect recognition in children.
Kluczniok, Dorothea; Hindi Attar, Catherine; Fydrich, Thomas; Fuehrer, Daniel; Jaite, Charlotte; Domes, Gregor; Winter, Sibylle; Herpertz, Sabine C; Brunner, Romuald; Boedeker, Katja; Bermpohl, Felix
2016-01-01
The association between maternal depression and adverse emotional and behavioral outcomes in children is well established. One associated factor might be altered affect recognition, which may be transmitted transgenerationally. Individuals with a history of depression show biased recognition of sadness. Our aim was to investigate parallels between affect recognition in mothers with remitted depression and in their children. 60 mother-child dyads completed an affect recognition morphing task. We examined two groups of remitted depressed mothers, with and without a history of physical or sexual abuse, and a group of healthy mothers without a history of physical or sexual abuse. Children were between 5 and 12 years old. Across groups, mothers identified happy faces fastest. Mothers with remitted depression showed a higher accuracy and response bias for sadness. We found corresponding results in their children. Maternal and children's bias and accuracy for sadness were positively correlated. Effects of remitted depression were found independent of maternal history of physical or sexual abuse. Our sample size was relatively small, and further longitudinal research is needed to investigate how maternal and children's affect recognition are associated with behavioral and emotional outcomes in the long term. Our data suggest a negative processing bias in mothers with remitted depression, which might represent both the perpetuation of and vulnerability to depression. Children of remitted depressed mothers appear to be exposed to this processing bias outside acute depressive episodes. This may promote the development of a corresponding processing bias in the children and could make children of depressed mothers more vulnerable to depressive disorders themselves. Copyright © 2015 Elsevier B.V. All rights reserved.
Emotional memory processing is influenced by sleep quality.
Tempesta, Daniela; De Gennaro, Luigi; Natale, Vincenzo; Ferrara, Michele
2015-07-01
The recall of emotional memory is enhanced after sleep and is hindered by sleep deprivation. We used an emotional memory task to assess whether poor sleep quality, as well as sleep deprivation, may influence the accuracy of memory recognition, but also the affective tone associated with the memory. Seventy-five subjects, divided into poor sleeper (PS), good sleeper (GS), and sleep deprivation (SD) groups, completed two recall (R) sessions: R1, 1 h after the encoding phase; and R2, after one night of sleep for PS and GS groups and after one night of sleep deprivation for the SD group. During the encoding phase, the participants rated valence and arousal of 90 pictures. During R1 and R2, the participants first made a yes/no memory judgment of the 45 target pictures intermingled with 30 non-target pictures, then rated valence and arousal of each picture. Recognition accuracy was higher for the PS and GS groups compared to the SD group for all pictures. Emotional valence of the remembered pictures was more negative after sleep deprivation and poor quality sleep, while it was preserved after a good sleep. These results provide the first evidence that poor sleep quality negatively affects emotional valence of memories, within the context of preserved emotional memory consolidation. It is suggested that low sleep quality and lack of sleep may impose a more negative affective tone to memories. The reported effects are not to be ascribed to depressive mood, but to a specific influence of poor sleep quality. Copyright © 2015 Elsevier B.V. All rights reserved.
Wegbreit, Ezra; Weissman, Alexandra B; Cushman, Grace K; Puzia, Megan E; Kim, Kerri L; Leibenluft, Ellen; Dickstein, Daniel P
2015-08-01
Bipolar disorder (BD) is a severe mental illness with high healthcare costs and poor outcomes. Increasing numbers of youths are diagnosed with BD, and many adults with BD report that their symptoms started in childhood, suggesting that BD can be a developmental disorder. Studies advancing our understanding of BD have shown alterations in facial emotion recognition both in children and adults with BD compared to healthy comparison (HC) participants, but none have evaluated the development of these deficits. To address this, we examined the effect of age on facial emotion recognition in a sample that included children and adults with confirmed childhood-onset type-I BD, with the adults having been diagnosed and followed since childhood by the Course and Outcome in Bipolar Youth study. Using the Diagnostic Analysis of Non-Verbal Accuracy, we compared facial emotion recognition errors among participants with BD (n = 66; ages 7-26 years) and HC participants (n = 87; ages 7-25 years). Complementary analyses investigated errors for child and adult faces. A significant diagnosis-by-age interaction indicated that younger BD participants performed worse than expected relative to HC participants their own age. The deficits occurred both for child and adult faces and were particularly strong for angry child faces, which were most often mistaken as sad. Our results were not influenced by medications, comorbidities/substance use, or mood state/global functioning. Younger individuals with BD are worse than their peers at this important social skill. This deficit may be an important developmentally salient treatment target - that is, for cognitive remediation to improve BD youths' emotion recognition abilities. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Wegbreit, Ezra; Weissman, Alexandra B; Cushman, Grace K; Puzia, Megan E; Kim, Kerri L; Leibenluft, Ellen; Dickstein, Daniel P
2015-01-01
Objectives Bipolar disorder (BD) is a severe mental illness with high healthcare costs and poor outcomes. Increasing numbers of youths are diagnosed with BD, and many adults with BD report their symptoms started in childhood, suggesting BD can be a developmental disorder. Studies advancing our understanding of BD have shown alterations in facial emotion recognition in both children and adults with BD compared to healthy comparison (HC) participants, but none have evaluated the development of these deficits. To address this, we examined the effect of age on facial emotion recognition in a sample that included children and adults with confirmed childhood-onset type-I BD, with the adults having been diagnosed and followed since childhood by the Course and Outcome in Bipolar Youth study. Methods Using the Diagnostic Analysis of Non-Verbal Accuracy, we compared facial emotion recognition errors among participants with BD (n = 66; ages 7–26 years) and HC participants (n = 87; ages 7–25 years). Complementary analyses investigated errors for child and adult faces. Results A significant diagnosis-by-age interaction indicated that younger BD participants performed worse than expected relative to HC participants their own age. The deficits occurred for both child and adult faces and were particularly strong for angry child faces, which were most often mistaken as sad. Our results were not influenced by medications, comorbidities/substance use, or mood state/global functioning. Conclusions Younger individuals with BD are worse than their peers at this important social skill. This deficit may be an important developmentally salient treatment target, i.e., for cognitive remediation to improve BD youths’ emotion recognition abilities. PMID:25951752
Attentional biases and memory for emotional stimuli in men and male rhesus monkeys.
Lacreuse, Agnès; Schatz, Kelly; Strazzullo, Sarah; King, Hanna M; Ready, Rebecca
2013-11-01
We examined attentional biases for social and non-social emotional stimuli in young adult men and compared the results to those of male rhesus monkeys (Macaca mulatta) previously tested in a similar dot-probe task (King et al. in Psychoneuroendocrinology 37(3):396-409, 2012). Recognition memory for these stimuli was also analyzed in each species, using a recognition memory task in humans and a delayed non-matching-to-sample task in monkeys. We found that both humans and monkeys displayed a similar pattern of attentional biases toward threatening facial expressions of conspecifics. The bias was significant in monkeys and of marginal significance in humans. In addition, humans, but not monkeys, exhibited an attentional bias away from negative non-social images. Attentional biases for social and non-social threat differed significantly, with both species showing a pattern of vigilance toward negative social images and avoidance of negative non-social images. Positive stimuli did not elicit significant attentional biases for either species. In humans, emotional content facilitated the recognition of non-social images, but no effect of emotion was found for the recognition of social images. Recognition accuracy was not affected by emotion in monkeys, but response times were faster for negative relative to positive images. Altogether, these results suggest shared mechanisms of social attention in humans and monkeys, with both species showing a pattern of selective attention toward threatening faces of conspecifics. These data are consistent with the view that selective vigilance to social threat is the result of evolutionary constraints. Yet, selective attention to threat was weaker in humans than in monkeys, suggesting that regulatory mechanisms enable non-anxious humans to reduce sensitivity to social threat in this paradigm, likely through enhanced prefrontal control and reduced amygdala activation. In addition, the findings emphasize important differences in attentional biases to social versus non-social threat in both species. Differences in the impact of emotional stimuli on recognition memory between monkeys and humans will require further study, as methodological differences in the recognition tasks may have affected the results.
Long-term effects of child abuse and neglect on emotion processing in adulthood.
Young, Joanna Cahall; Widom, Cathy Spatz
2014-08-01
To determine whether child maltreatment has a long-term impact on emotion processing abilities in adulthood and whether IQ, psychopathology, or psychopathy mediate the relationship between childhood maltreatment and emotion processing in adulthood. Using a prospective cohort design, children (ages 0-11) with documented cases of abuse and neglect during 1967-1971 were matched with non-maltreated children and followed up into adulthood. Potential mediators (IQ; Post-Traumatic Stress [PTSD], Generalized Anxiety [GAD], Dysthymia, and Major Depressive [MDD] Disorders; and psychopathy) were assessed in young adulthood with standardized assessment techniques. In middle adulthood (M age = 47), the International Affective Picture System was used to measure emotion processing. Structural equation modeling was used to test mediation models. Individuals with a history of childhood maltreatment were less accurate in emotion processing overall and in processing positive and neutral pictures than matched controls. Childhood physical abuse predicted less accuracy for neutral pictures, and childhood sexual abuse and neglect predicted less accuracy in recognizing positive pictures. MDD, GAD, and IQ predicted overall picture recognition accuracy. However, of the mediators examined, only IQ acted to mediate the relationship between child maltreatment and emotion processing deficits. Although research has focused on emotion processing in maltreated children, these new findings show an impact of child abuse and neglect on emotion processing in middle adulthood. Research and interventions aimed at improving emotion processing deficiencies in abused and neglected children should consider the role of IQ. Copyright © 2014 Elsevier Ltd. All rights reserved.
Support vector machine-based facial-expression recognition method combining shape and appearance
NASA Astrophysics Data System (ADS)
Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun
2010-11-01
Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous methods and other fusion approaches.
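The fusion stage described above, an SVM over the two matching scores, reduces to a very small model. A sketch follows, with synthetic scores standing in for the shape- and appearance-based matchers; the score ranges, label rule, and kernel choice are assumptions for illustration.

```python
# Sketch of the score-fusion stage: two matching scores per sample (one from
# a shape-based matcher, one from an appearance-based matcher) are combined
# by an SVM, mirroring the paper's two-score fusion idea.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 300
shape_score = rng.uniform(0, 1, n)        # e.g., feature-point-ratio match
appearance_score = rng.uniform(0, 1, n)   # e.g., texture/appearance match
X = np.column_stack([shape_score, appearance_score])
# Synthetic ground truth: same-expression pairs score higher on both channels.
y = (0.6 * shape_score + 0.4 * appearance_score + rng.normal(0, 0.1, n)) > 0.5

fusion_svm = SVC(kernel="rbf").fit(X, y)  # decides same- vs different-expression
```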
Earles, Julie L; Kersten, Alan W; Vernon, Laura L; Starkings, Rachel
2016-01-01
When remembering an event, it is important to remember both the features of the event (e.g., a person and an action) and the connections among features (e.g., who performed which action). Emotion often enhances memory for stimulus features, but the relationship between emotion and the binding of features in memory is unclear. Younger and older adults attempted to remember events in which a person performed a negative, positive or neutral action. Memory for the action was enhanced by emotion, but emotion did not enhance the ability of participants to remember which person performed which action. Older adults were more likely than younger adults to make binding errors in which they incorrectly remembered a familiar actor performing a familiar action that had actually been performed by someone else, and this age-related associative deficit was found for both neutral and emotional actions. Emotion not only increased correct recognition of old events for older and younger adults but also increased false recognition of events in which a familiar actor performed a familiar action that had been performed by someone else. Thus, although emotion may enhance memory for the features of an event, it does not increase the accuracy of remembering who performed which action.
A Multimodal Emotion Detection System during Human-Robot Interaction
Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.
2013-01-01
In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has been also developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately. PMID:24240598
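The paper's decision rule was set empirically from experiments with real users; purely as an illustration of the pattern, a toy combiner over (label, confidence) outputs from the two channels might look like the following. The agreement/confidence logic and the threshold are invented, not GEVA/GEFA's actual rule.

```python
# Toy fusion rule over two emotion channels: keep an agreeing label,
# otherwise take the more confident one, falling back to "neutral" when
# neither channel is confident enough.
def combine(voice, face, min_conf=0.4):
    """voice/face: (label, confidence) pairs from the two analyzers."""
    (v_lab, v_conf), (f_lab, f_conf) = voice, face
    if v_lab == f_lab:
        return v_lab, max(v_conf, f_conf)
    conf, lab = max((v_conf, v_lab), (f_conf, f_lab))
    return (lab, conf) if conf >= min_conf else ("neutral", conf)

print(combine(("happy", 0.7), ("surprise", 0.5)))  # -> ('happy', 0.7)
```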
Menstrual-cycle dependent fluctuations in ovarian hormones affect emotional memory.
Bayer, Janine; Schultz, Heidrun; Gamer, Matthias; Sommer, Tobias
2014-04-01
The hormones progesterone and estradiol modulate neural plasticity in the hippocampus, the amygdala and the prefrontal cortex. These structures are involved in the superior memory for emotionally arousing information (EEM effects). Therefore, fluctuations in hormonal levels across the menstrual cycle are expected to influence activity in these areas as well as behavioral memory performance for emotionally arousing events. To test this hypothesis, naturally cycling women underwent functional magnetic resonance imaging during the encoding of emotional and neutral stimuli in the low-hormone early follicular and the high-hormone luteal phase. Their memory was tested after an interval of 48 h, because emotional arousal primarily enhances the consolidation of new memories. Whereas overall recognition accuracy remained stable across cycle phases, recognition quality varied with menstrual cycle phases. Particularly recollection-based recognition memory for negative items tended to decrease from early follicular to luteal phase. EEM effects for both valences were associated with higher activity in the right anterior hippocampus during early follicular compared to luteal phase. Valence-specific modulations were found in the anterior cingulate, the amygdala and the posterior hippocampus. Current findings connect to anxiolytic actions of estradiol and progesterone as well as to studies on fear conditioning. Moreover, they are in line with differential networks involved in EEM effects for positive and negative items. Copyright © 2014 Elsevier Inc. All rights reserved.
Facial affect recognition in early and late-stage schizophrenia patients.
Romero-Ferreiro, María Verónica; Aguado, Luis; Rodriguez-Torresano, Javier; Palomo, Tomás; Rodriguez-Jimenez, Roberto; Pedreira-Massa, José Luis
2016-04-01
Prior studies have shown deficits in social cognition and emotion perception in first-episode psychosis (FEP) and multi-episode schizophrenia (MES) patients. These studies compared patients at different stages of the illness with only a single control group which differed in age from at least one clinical group. The present study provides new evidence of a differential pattern of deficit in facial affect recognition in FEP and MES patients using a double age-matched control design. Compared to their controls, FEP patients only showed impaired recognition of fearful faces (p=.007). In contrast to this, the MES patients showed a more generalized deficit compared to their age-matched controls, with impaired recognition of angry, sad and fearful faces (ps<.01) and an increased misattribution of emotional meaning to neutral faces. PANSS scores of FEP patients on Depressed factor correlated positively with the accuracy to recognize fearful expressions (r=.473). For the MES group fear recognition correlated positively with negative PANSS factor (r=.498) and recognition of sad and neutral expressions was inversely correlated with disorganized PANSS factor (r=-.461 and r=-.541, respectively). These results provide evidence that a generalized impairment of affect recognition is observed in advanced-stage patients and is not characteristic of the early stages of schizophrenia. Moreover, the finding that anomalous attribution of emotional meaning to neutral faces is observed only in MES patients suggests that an increased attribution of salience to social stimuli is a characteristic of social cognition in advanced stages of the disorder. Copyright © 2016 Elsevier B.V. All rights reserved.
Park, Sungwon; Kim, Kiwoong
2011-12-01
The present study aimed to investigate physiological reactivity and recognition in response to emotional stimuli in outpatients with schizophrenia and in healthy controls. Skin conductance response, skin conductance level, heart rate, respiration, corrugator muscle activity, and orbicularis muscle activity were all measured using five emotion-eliciting film clips. The patients reported lower intensity of experienced anger and disgust than controls. The patient and control groups did not differ in the accuracy of recognizing emotions, except for anger. Anger, fear, amusement, and sadness had a discriminative effect on physiological responses in the two groups. These findings provide helpful evidence on physiological responses influenced by harmful or favorable emotional stimuli. Future directions may include clarifying how physiological reactivity and subjective experience of emotion are related to patients' functioning. 2011 Elsevier Inc. All rights reserved.
Lee, Jung Suk; Chun, Ji Won; Kang, Jee In; Kang, Dong-Il; Park, Hae-Jeong; Kim, Jae-Jin
2012-07-30
Emotional memory dysfunction may be associated with anhedonia in schizophrenia. This study aimed to investigate the neurobiological basis of emotional memory and its relationship with anhedonia in schizophrenia, specifically in emotional memory-related brain regions of interest (ROIs) including the amygdala, hippocampus, nucleus accumbens, and ventromedial prefrontal cortex. Fourteen patients with schizophrenia and 16 healthy subjects performed a word-image associative encoding task, during which a neutral word was presented with a positive, neutral, or control image. Subjects underwent functional magnetic resonance imaging while performing the recognition task. Correlation analyses were performed between the percent signal change (PSC) in the ROIs and the anhedonia scores. We found no group differences in recognition accuracy and reaction time. The PSC of the hippocampus in the positive and neutral conditions, and the PSC of the nucleus accumbens in the control condition, appeared to be negatively correlated with the Physical Anhedonia Scale (PAS) scores in patients with schizophrenia, while significant correlations with the PAS scores were not observed in healthy subjects. This study provides further evidence for the roles of the hippocampus and nucleus accumbens in trait physical anhedonia and for possible associations between emotional memory deficits and trait physical anhedonia in patients with schizophrenia. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Emotion processing biases and resting EEG activity in depressed adolescents
Auerbach, Randy P.; Stewart, Jeremy G.; Stanton, Colin H.; Mueller, Erik M.; Pizzagalli, Diego A.
2015-01-01
Background While theorists have posited that adolescent depression is characterized by emotion processing biases (greater propensity to identify sad than happy facial expressions), findings have been mixed. Additionally, the neural correlates associated with putative emotion processing biases remain largely unknown. Our aim was to identify emotion processing biases in depressed adolescents and examine neural abnormalities related to these biases using high-density resting EEG and source localization. Methods Healthy (n = 36) and depressed (n = 23) female adolescents, aged 13–18 years, completed a facial recognition task in which they identified happy, sad, fear, and angry expressions across intensities from 10% (low) to 100% (high). Additionally, 128-channel resting (i.e., task-free) EEG was recorded and analyzed using a distributed source localization technique (LORETA). Given research implicating the dorsolateral prefrontal cortex (DLPFC) in depression and emotion processing, analyses focused on this region. Results Relative to healthy youth, depressed adolescents were more accurate for sad and less accurate for happy, particularly low-intensity happy faces. No differences emerged for fearful or angry facial expressions. Further, LORETA analyses revealed greater theta and alpha current density (i.e., reduced brain activity) in depressed versus healthy adolescents, particularly in the left DLPFC (BA9/BA46). Theta and alpha current density were positively correlated, and greater current density predicted reduced accuracy for happy faces. Conclusion Depressed female adolescents were characterized by emotion processing biases in favor of sad emotions and reduced recognition of happiness, especially when cues of happiness were subtle. Blunted recognition of happy was associated with left DLPFC resting hypoactivity. PMID:26032684
An algorithm of improving speech emotional perception for hearing aid
NASA Astrophysics Data System (ADS)
Xi, Ji; Liang, Ruiyu; Fei, Xianju
2017-07-01
In this paper, a speech emotion recognition (SER) algorithm is proposed to improve the emotional perception of hearing-impaired people. The algorithm utilizes multiple kernel technology to overcome a drawback of the SVM: slow training speed. Firstly, in order to improve the adaptive performance of the Gaussian radial basis function (RBF), the parameter determining the nonlinear mapping was optimized on the basis of kernel target alignment. Then, the obtained kernel function was used as the basis kernel of multiple kernel learning (MKL) with a slack variable that could solve the over-fitting problem. However, the slack variable also introduces error into the result. Therefore, a soft-margin MKL was proposed to balance the margin against the error. Moreover, an iterative algorithm was used to solve for the combination coefficients and the hyper-plane equations. Experimental results show that the proposed algorithm can achieve an accuracy of 90% for five kinds of emotions: happiness, sadness, anger, fear, and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.
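Kernel target alignment, used above to tune the RBF parameter, is easy to sketch: score each candidate kernel matrix by its normalized Frobenius inner product with the ideal label kernel y yᵀ and keep the best-aligned width. The features, labels, and gamma grid below are synthetic, and the MKL and soft-margin stages are not reproduced.

```python
# Sketch of kernel-target alignment (KTA) for RBF width selection: choose the
# gamma whose kernel matrix aligns best with the ideal label kernel y y^T.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def alignment(K, y):
    """Normalized Frobenius inner product <K, y y^T> for labels in {-1, +1}."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 20))          # hypothetical acoustic features
y = np.sign(rng.standard_normal(100))       # hypothetical binary emotion labels

gammas = [0.001, 0.01, 0.1, 1.0]
best_gamma = max(gammas, key=lambda g: alignment(rbf_kernel(X, gamma=g), y))
```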
The effect of the social regulation of emotion on emotional long-term memory.
Flores, Luis E; Berenbaum, Howard
2017-04-01
Memories for emotional events tend to be stronger than for neutral events, and weakening negative memories can be helpful to promote well-being. The present study examined whether the social regulation of emotion (in the form of handholding) altered the strength of emotional long-term memory. A sample of 219 undergraduate students viewed sets of negative, neutral, and positive images. Each participant held a stress ball while viewing half of the images and held someone's hand while viewing the other half. Participants returned 1 week later to complete a recognition task. Performance on the recognition task demonstrated that participants had lower memory accuracy for negative but not for positive pictures that were shown while they were holding someone's hand compared with when they were holding a stress ball. Although handholding altered the strength of negative emotional long-term memory, it did not down-regulate negative affective response as measured by self-report or facial expressivity. The present findings provide evidence that the social regulation of emotion can help weaken memory for negative information. Given the role of strong negative memories in different forms of psychopathology (e.g., depression, posttraumatic stress disorder), these findings may help better understand how close relationships protect against psychopathology. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
The development of cross-cultural recognition of vocal emotion during childhood and adolescence.
Chronaki, Georgia; Wigelsworth, Michael; Pell, Marc D; Kotz, Sonja A
2018-06-14
Humans have an innate set of emotions recognised universally. However, emotion recognition also depends on socio-cultural rules. Although adults recognise vocal emotions universally, they identify emotions more accurately in their native language. We examined developmental trajectories of universal vocal emotion recognition in children. Eighty native English speakers completed a vocal emotion recognition task in their native language (English) and in foreign languages (Spanish, Chinese, and Arabic) expressing anger, happiness, sadness, fear, and neutrality. Emotion recognition was compared across 8-to-10-year-olds, 11-to-13-year-olds, and adults. Measures of behavioural and emotional problems were also taken. Results showed that although emotion recognition was above chance for all languages, native English-speaking children were more accurate in recognising vocal emotions in their native language. There was a larger improvement in recognising vocal emotion from the native language during adolescence. Vocal anger recognition did not improve with age for the non-native languages. This is the first study to demonstrate the universality of vocal emotion recognition in children whilst supporting an "in-group advantage" for more accurate recognition in the native language. Findings highlight the role of experience in emotion recognition, have implications for child development in modern multicultural societies and address important theoretical questions about the nature of emotions.
Affect Recognition in Adults with Attention-Deficit/Hyperactivity Disorder
Miller, Meghan; Hanford, Russell B.; Fassbender, Catherine; Duke, Marshall; Schweitzer, Julie B.
2014-01-01
Objective This study compared affect recognition abilities between adults with and without Attention-Deficit/Hyperactivity Disorder (ADHD). Method The sample included 51 participants (34 men, 17 women) divided into 3 groups: ADHD-Combined Type (ADHD-C; n = 17), ADHD-Predominantly Inattentive Type (ADHD-I; n = 16), and controls (n = 18). The mean age was 34 years. Affect recognition abilities were assessed by the Diagnostic Analysis of Nonverbal Accuracy (DANVA). Results Analyses of Variance showed that the ADHD-I group made more fearful emotion errors relative to the control group. Inattentive symptoms were positively correlated while hyperactive-impulsive symptoms were negatively correlated with affect recognition errors. Conclusion These results suggest that affect recognition abilities may be impaired in adults with ADHD and that affect recognition abilities are more adversely affected by inattentive than hyperactive-impulsive symptoms. PMID:20555036
Hatamikia, Sepideh; Maghooli, Keivan; Nasrabadi, Ali Motie
2014-01-01
Electroencephalogram (EEG) is one of the useful biological signals for distinguishing different brain diseases and mental states. In recent years, detecting emotional states from biological signals has attracted increasing attention from researchers, and several feature extraction methods and classifiers have been suggested for recognizing emotions from EEG signals. In this research, we introduce an emotion recognition system using an autoregressive (AR) model, sequential forward feature selection (SFS) and a K-nearest neighbor (KNN) classifier, applied to EEG signals recorded during emotional audio-visual inductions. The main purpose of this paper is to investigate the performance of AR features in the classification of emotional states. To achieve this goal, a well-established AR estimator (Burg's method), based on a Levinson-Durbin-style recursion, is used and the AR coefficients are extracted as feature vectors. In the next step, two feature selection methods, based on the SFS algorithm and on the Davies-Bouldin index, are used to reduce computational complexity and feature redundancy; three classifiers, KNN, quadratic discriminant analysis and linear discriminant analysis, are then used to discriminate two and three classes of valence and arousal levels. The proposed method is evaluated on EEG signals from the Database for Emotion Analysis using Physiological signals (DEAP), recorded from 32 participants during forty 1-minute audio-visual inductions. According to the results, AR features are efficient for recognizing emotional states from EEG signals, and KNN performs better than the two other classifiers in discriminating both two and three valence/arousal classes. The results also show that the SFS method improves accuracy by roughly 10-15% compared to Davies-Bouldin-based feature selection. The best accuracies are 72.33% and 74.20% for two classes of valence and arousal, and 61.10% and 65.16% for three classes, respectively. PMID:25298928
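Burg's method and the SFS-plus-KNN pipeline described above are standard enough to sketch. The following is a minimal illustration on synthetic epochs, not the authors' code: the AR order, epoch length and labels are hypothetical, and scikit-learn's SequentialFeatureSelector stands in for the paper's SFS implementation.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

def arburg(x, order):
    """AR coefficients a_1..a_p of x via Burg's recursion."""
    ef = np.asarray(x, dtype=float).copy()    # forward prediction errors
    eb = ef.copy()                            # backward prediction errors
    a = np.array([1.0])
    for _ in range(order):
        efp, ebp = ef[1:], eb[:-1]
        # reflection coefficient minimising forward + backward error power
        k = -2.0 * ebp.dot(efp) / (efp.dot(efp) + ebp.dot(ebp))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        ef, eb = efp + k * ebp, ebp + k * efp
    return a[1:]

rng = np.random.default_rng(1)
epochs = rng.normal(size=(120, 512))          # hypothetical 1-channel EEG epochs
labels = rng.integers(0, 2, size=120)         # e.g. low vs. high valence

X = np.array([arburg(e, order=10) for e in epochs])   # AR coefficients as features

knn = KNeighborsClassifier(n_neighbors=5)
sfs = SequentialFeatureSelector(knn, n_features_to_select=5, direction="forward")
X_sel = sfs.fit_transform(X, labels)
print("kept AR coefficients:", np.flatnonzero(sfs.get_support()))
print("training accuracy:", knn.fit(X_sel, labels).score(X_sel, labels))
```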
Textual emotion recognition for enhancing enterprise computing
NASA Astrophysics Data System (ADS)
Quan, Changqin; Ren, Fuji
2016-05-01
The growing interest in affective computing (AC) brings many valuable research topics that can meet different application demands in enterprise systems. The present study explores a sub-area of AC techniques: textual emotion recognition for enhancing enterprise computing. Multi-label emotion recognition in text can provide a more comprehensive understanding of emotions than single-label emotion recognition. A representation of the 'emotion state in text' is proposed to encompass the multidimensional emotions in text. It ensures a formal description of the configurations of basic emotions as well as of the relations between them. Our method allows recognition of emotions for words bearing indirect emotions, emotion ambiguity and multiple emotions. We further investigate the effect of word order on emotional expression by comparing the performance of a bag-of-words model and a sequence model for multi-label sentence emotion recognition. The experiments show that the classification results under the sequence model are better than under the bag-of-words model, and a homogeneous Markov model showed promising results for multi-label sentence emotion recognition. This emotion recognition system provides a convenient way to acquire valuable emotion information and to improve enterprise competitive ability in many respects.
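The 'emotion state in text' representation and the homogeneous Markov sequence model are not spelled out in the abstract, so only the bag-of-words side of the comparison can be sketched with confidence. Below is a minimal multi-label setup on a hypothetical toy corpus: one binary classifier per emotion with word order discarded, which is precisely the limitation the sequence model is meant to address.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# hypothetical toy corpus: a sentence may carry several emotions at once
texts = [
    "I can't believe you did this to me",   # anger + surprise
    "What a wonderful surprise party!",     # joy + surprise
    "I miss her so much it hurts",          # sadness
    "This is disgusting and infuriating",   # disgust + anger
]
labels = [{"anger", "surprise"}, {"joy", "surprise"},
          {"sadness"}, {"disgust", "anger"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)               # one binary column per emotion

# bag-of-words model: word order is discarded entirely
vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

pred = clf.predict(vec.transform(["what a disgusting thing to say"]))
print(mlb.inverse_transform(pred))          # predicted emotion set(s)
```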
Kamboj, Sunjeev K; Joye, Alyssa; Bisby, James A; Das, Ravi K; Platt, Bradley; Curran, H Valerie
2013-05-01
Studies of affect recognition can inform our understanding of the interpersonal effects of alcohol and help develop a more complete neuropsychological profile of this drug. The objective of the study was to examine affect recognition in social drinkers using a novel dynamic affect-recognition task, sampling performance across a range of evolutionarily significant target emotions and neutral expressions. Participants received 0, 0.4 or 0.8 g/kg alcohol in a double-blind, independent-groups design. Relatively naturalistic changes in facial expression, from neutral (mouth open) to increasing intensities of target emotions, as well as neutral (mouth closed), were simulated using computer-generated dynamic morphs. Accuracy and reaction time were measured, and a two-high-threshold model was applied to hit and false-alarm data to determine sensitivity and response bias. While there was no effect on the principal emotion expressions (happiness, sadness, fear, anger and disgust), participants administered 0.4 g/kg alcohol tended to show an enhanced response bias to neutral expressions compared to those receiving 0.8 g/kg alcohol or placebo. Exploration of this effect suggested an accompanying tendency to misattribute neutrality to sad expressions following the 0.4 g/kg dose. Thus 0.4 g/kg alcohol, but not 0.8 g/kg, produced a limited and specific modification in affect recognition, evidenced by a neutral response bias and possibly an accompanying tendency to misclassify sad expressions as neutral. In light of previous findings on involuntary negative memory following the 0.4 g/kg dose, we suggest that moderate, but not high, doses of alcohol have a special relevance to emotional processing in social drinkers.
Wang, Kun-Ching
2015-01-14
The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction method based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the emotional speech spectrogram at multiple resolutions should form a good feature set for emotion classification in speech, and multi-resolution texture analysis can discriminate between emotions more clearly than uniform-resolution analysis. To provide high accuracy of emotional discrimination, especially in real-life conditions, an acoustic activity detection (AAD) algorithm is applied before the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based features, inspired by human visual perception of the spectrogram image, provide significant classification gains for real-life emotion recognition in speech.
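The exact MRTII pipeline is not given in the abstract, but the core idea, treating the spectrogram as a texture image at several resolutions, can be sketched generically. The snippet below is an assumption-laden stand-in: gray-level co-occurrence (GLCM) statistics serve as the texture descriptor, two FFT window lengths serve as the 'resolutions', and the real MRTII features, the AAD front end and the call-center corpora are not reproduced.

```python
import numpy as np
from scipy.signal import spectrogram
from skimage.feature import graycomatrix, graycoprops   # skimage >= 0.19 spelling

rng = np.random.default_rng(2)
fs = 16000
speech = rng.normal(size=fs)                # hypothetical 1-second utterance

def texture_features(x, fs, nperseg):
    """GLCM texture statistics of the log-spectrogram at one resolution."""
    _, _, S = spectrogram(x, fs=fs, nperseg=nperseg)
    img = np.log(S + 1e-10)
    # quantise to 8 grey levels; GLCMs need small integer-valued images
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    img = (img * 7).round().astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=8, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")]

# "multi-resolution": pool texture statistics from two FFT window lengths
features = texture_features(speech, fs, 256) + texture_features(speech, fs, 1024)
print(features)
```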
Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka
2014-01-01
Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed under a promotion focus than under a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had no impact on either perceptual processing or facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.
Verma, Gyanendra K; Tiwary, Uma Shanker
2014-11-15
The purpose of this paper is twofold: (i) to investigate emotion representation models and find a model with a minimum number of continuous dimensions, and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are electroencephalogram (EEG) (32 channels) and peripheral signals (8 channels: galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)), as given in the DEAP database. We discuss theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach and (iii) the dimensional approach, and propose a three-continuous-dimensional representation model for emotions. A clustering experiment on the given valence, arousal and dominance values of various emotions was performed to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions is also proposed. The Discrete Wavelet Transform, a classical transform for multiresolution signal analysis, is used in this study. Experiments were performed to classify different emotions with four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for SVM, MLP, KNN and MMC classifiers, respectively. The best accuracy is for 'Depressing', at 85.46% using SVM. The 32 EEG channels are treated as independent modes, and features from each channel are given equal importance; some of the channel data may be correlated, but they may also contain supplementary information. In comparison with results reported by others, the high accuracy of 85% with 13 emotions and 32 subjects clearly demonstrates the potential of our multimodal fusion approach. Copyright © 2013 Elsevier Inc. All rights reserved.
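The DWT feature extraction and classification stage lends itself to a short sketch. The code below is a minimal stand-in using synthetic DEAP-shaped trials: the wavelet, decomposition level and the energy/standard-deviation summary per sub-band are assumptions, not the authors' exact feature set.

```python
import numpy as np
import pywt                                   # PyWavelets
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
trials = rng.normal(size=(60, 32, 512))       # hypothetical trials x channels x samples
labels = rng.integers(0, 2, size=60)          # e.g. low vs. high arousal

def dwt_features(sig, wavelet="db4", level=4):
    """Energy and standard deviation of each sub-band of a multilevel DWT."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    return [stat for c in coeffs for stat in (np.sum(c ** 2), np.std(c))]

# channels are treated as independent modes; their features are concatenated
X = np.array([[v for ch in trial for v in dwt_features(ch)] for trial in trials])

print("5-fold CV accuracy:",
      cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```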
Revealing Real-Time Emotional Responses: a Personalized Assessment based on Heartbeat Dynamics
NASA Astrophysics Data System (ADS)
Valenza, Gaetano; Citi, Luca; Lanatá, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2014-05-01
Emotion recognition through computational modeling and analysis of physiological signals has been widely investigated in the last decade. Most of the proposed emotion recognition systems require relatively long-time series of multivariate records and do not provide accurate real-time characterizations using short-time series. To overcome these limitations, we propose a novel personalized probabilistic framework able to characterize the emotional state of a subject through the analysis of heartbeat dynamics exclusively. The study includes thirty subjects presented with a set of standardized images gathered from the International Affective Picture System, alternating levels of arousal and valence. Due to the intrinsic nonlinearity and nonstationarity of the RR interval series, a specific point-process model was devised for instantaneous identification, considering autoregressive nonlinearities up to the third order according to the Wiener-Volterra representation, thus tracking very fast stimulus-response changes. Features from the instantaneous spectrum and bispectrum, as well as the dominant Lyapunov exponent, were extracted and used as input features to a support vector machine for classification. Results, estimating emotions every 10 seconds, achieve an overall accuracy of 79.29% in recognizing four emotional states based on the circumplex model of affect, with 79.15% on the valence axis and 83.55% on the arousal axis.
Functional architecture of visual emotion recognition ability: A latent variable approach.
Lewis, Gary J; Lefevre, Carmen E; Young, Andrew W
2016-05-01
Emotion recognition has been a focus of considerable attention for several decades. However, despite this interest, the underlying structure of individual differences in emotion recognition ability has been largely overlooked and thus is poorly understood. For example, limited knowledge exists concerning whether recognition ability for one emotion (e.g., disgust) generalizes to other emotions (e.g., anger, fear). Furthermore, it is unclear whether emotion recognition ability generalizes across modalities, such that those who are good at recognizing emotions from the face, for example, are also good at identifying emotions from nonfacial cues (such as cues conveyed via the body). The primary goal of the current set of studies was to address these questions through establishing the structure of individual differences in visual emotion recognition ability. In three independent samples (Study 1: n = 640; Study 2: n = 389; Study 3: n = 303), we observed that the ability to recognize visually presented emotions is based on different sources of variation: a supramodal emotion-general factor, supramodal emotion-specific factors, and face- and within-modality emotion-specific factors. In addition, we found evidence that general intelligence and alexithymia were associated with supramodal emotion recognition ability. Autism-like traits, empathic concern, and alexithymia were independently associated with face-specific emotion recognition ability. These results (a) provide a platform for further individual differences research on emotion recognition ability, (b) indicate that differentiating levels within the architecture of emotion recognition ability is of high importance, and (c) show that the capacity to understand expressions of emotion in others is linked to broader affective and cognitive processes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Shared mechanism for emotion processing in adolescents with and without autism
Ioannou, Christina; Zein, Marwa El; Wyart, Valentin; Scheid, Isabelle; Amsellem, Frédérique; Delorme, Richard; Chevallier, Coralie; Grèzes, Julie
2017-01-01
Although the quest to understand emotional processing in individuals with Autism Spectrum Disorders (ASD) has led to an impressive number of studies, the picture that emerges from this research remains inconsistent. Some studies find that Typically Developing (TD) individuals outperform those with ASD in emotion recognition tasks, while others find no such difference. In this paper, we move beyond focusing on potential group differences in behaviour to answer what we believe is a more pressing question: do individuals with ASD use the same mechanisms to process emotional cues? To this end, we rely on model-based analyses of participants' accuracy during an emotion categorisation task in which displays of anger and fear are paired with direct vs. averted gaze. Behavioural data of 20 ASD and 20 TD adolescents revealed that the ASD group displayed lower overall performance. Yet, gaze direction had a similar impact on emotion categorisation in both groups, i.e. improved accuracy for salient combinations (anger-direct, fear-averted). Critically, computational modelling of participants' behaviour reveals that the same mechanism, i.e. increased perceptual sensitivity, underlies the contextual impact of gaze in both groups. We discuss the specific experimental conditions that may favour emotion processing and the automatic integration of contextual information in ASD. PMID:28218248
Processing of Facial Emotion in Bipolar Depression and Euthymia.
Robinson, Lucy J; Gray, John M; Burt, Mike; Ferrier, I Nicol; Gallagher, Peter
2015-10-01
Previous studies of facial emotion processing in bipolar disorder (BD) have reported conflicting findings. In independently conducted studies, we investigate facial emotion labeling in euthymic and depressed BD patients using tasks with static and dynamically morphed images of different emotions displayed at different intensities. Study 1 included 38 euthymic BD patients and 28 controls. Participants completed two tasks: labeling of static images of basic facial emotions (anger, disgust, fear, happy, sad) shown at different expression intensities; and the Eyes Test (Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001), which involves recognition of complex emotions using only the eye region of the face. Study 2 included 53 depressed BD patients and 47 controls. Participants completed two tasks: labeling of "dynamic" facial expressions of the same five basic emotions; and the Emotional Hexagon test (Young, Perret, Calder, Sprengelmeyer, & Ekman, 2002). Neither patient group differed significantly from controls on any measure of emotion perception/labeling. A significant group by intensity interaction was observed in both emotion labeling tasks (euthymia and depression), although this effect did not survive the addition of measures of executive function/psychomotor speed as covariates. Only 2.6-15.8% of euthymic patients and 7.8-13.7% of depressed patients scored below the 10th percentile of the controls for total emotion recognition accuracy. There was no evidence of specific deficits in facial emotion labeling in euthymic or depressed BD patients. Methodological variations, including mood state, sample size, and the cognitive demands of the tasks, may contribute significantly to the variability in findings between studies.
Emotion Recognition in Frontotemporal Dementia and Alzheimer's Disease: A New Film-Based Assessment
Goodkind, Madeleine S.; Sturm, Virginia E.; Ascher, Elizabeth A.; Shdo, Suzanne M.; Miller, Bruce L.; Rankin, Katherine P.; Levenson, Robert W.
2015-01-01
Deficits in recognizing others' emotions are reported in many psychiatric and neurological disorders, including autism, schizophrenia, behavioral variant frontotemporal dementia (bvFTD) and Alzheimer's disease (AD). Most previous emotion recognition studies have required participants to identify emotional expressions in photographs. This type of assessment differs from real-world emotion recognition in important ways: Images are static rather than dynamic, include only 1 modality of emotional information (i.e., visual information), and are presented absent a social context. Additionally, existing emotion recognition batteries typically include multiple negative emotions, but only 1 positive emotion (i.e., happiness) and no self-conscious emotions (e.g., embarrassment). We present initial results using a new task for assessing emotion recognition that was developed to address these limitations. In this task, respondents view a series of short film clips and are asked to identify the main characters' emotions. The task assesses multiple negative, positive, and self-conscious emotions based on information that is multimodal, dynamic, and socially embedded. We evaluate this approach in a sample of patients with bvFTD, AD, and normal controls. Results indicate that patients with bvFTD have emotion recognition deficits in all 3 categories of emotion compared to the other groups. These deficits were especially pronounced for negative and self-conscious emotions. Emotion recognition in this sample of patients with AD was indistinguishable from controls. These findings underscore the utility of this approach to assessing emotion recognition and suggest that previous findings that recognition of positive emotion was preserved in dementia patients may have resulted from the limited sampling of positive emotion in traditional tests. PMID:26010574
Kumfor, Fiona; Irish, Muireann; Hodges, John R.; Piguet, Olivier
2013-01-01
Patients with frontotemporal dementia have pervasive changes in emotion recognition and social cognition, yet the neural changes underlying these emotion processing deficits remain unclear. The multimodal system model of emotion proposes that basic emotions are dependent on distinct brain regions, which undergo significant pathological changes in frontotemporal dementia. As such, this syndrome may provide important insight into the impact of neural network degeneration upon the innate ability to recognise emotions. This study used voxel-based morphometry to identify discrete neural correlates involved in the recognition of basic emotions (anger, disgust, fear, sadness, surprise and happiness) in frontotemporal dementia. Forty frontotemporal dementia patients (18 behavioural-variant, 11 semantic dementia, 11 progressive nonfluent aphasia) and 27 healthy controls were tested on two facial emotion recognition tasks: The Ekman 60 and Ekman Caricatures. Although each frontotemporal dementia group showed impaired recognition of negative emotions, distinct associations between emotion-specific task performance and changes in grey matter intensity emerged. Fear recognition was associated with the right amygdala; disgust recognition with the left insula; anger recognition with the left middle and superior temporal gyrus; and sadness recognition with the left subcallosal cingulate, indicating that discrete neural substrates are necessary for emotion recognition in frontotemporal dementia. The erosion of emotion-specific neural networks in neurodegenerative disorders may produce distinct profiles of performance that are relevant to understanding the neurobiological basis of emotion processing. PMID:23805313
The role of attention in emotional memory enhancement in pathological and healthy aging.
Sava, Alina-Alexandra; Paquet, Claire; Dumurgier, Julien; Hugon, Jacques; Chainay, Hanna
2016-01-01
After short delays between encoding and retrieval, healthy young participants have better memory performance for emotional stimuli than for neutral stimuli. Divided-attention paradigms suggest that this emotional enhancement of memory (EEM) is due to different attention mechanisms involved during encoding: automatic processing for negative stimuli, and controlled processing for positive stimuli. As far as we know, no study has examined the influence of these factors on the EEM in Alzheimer's disease (AD) and mild cognitive impairment (MCI) patients, as compared to healthy young and older controls. Thus, the goal of our study was to ascertain whether the EEM in these populations depends on the attention resources available at encoding. Participants completed two encoding phases, full attention (FA) and divided attention (DA), followed by two retrieval phases (recognition tasks). There was no EEM effect on discrimination accuracy, independent of group and encoding condition. Nevertheless, all participants used a more liberal response criterion for the negative and positive stimuli than for neutral ones. In AD patients, larger numbers of false recognitions for negative and positive stimuli than for neutral ones were observed after both encoding conditions. In MCI patients and in healthy older and younger controls this effect was observed only for negative stimuli, and it depended on the encoding condition: it was observed in young controls after both encoding conditions, in older controls after the DA encoding, and in MCI patients after the FA encoding. In conclusion, our results suggest that emotional valence does not always enhance discrimination accuracy. Nevertheless, under certain conditions related to the attention resources available at encoding, emotional valence, especially negative valence, enhances the subjective feeling of familiarity and, consequently, engenders changes in response bias. This effect seems to be sensitive to the age and the pathology of participants.
Labuschagne, Izelle; Jones, Rebecca; Callaghan, Jenny; Whitehead, Daisy; Dumas, Eve M; Say, Miranda J; Hart, Ellen P; Justo, Damian; Coleman, Allison; Dar Santos, Rachelle C; Frost, Chris; Craufurd, David; Tabrizi, Sarah J; Stout, Julie C
2013-05-15
Facial emotion recognition impairments have been reported in Huntington's disease (HD). However, the nature of the impairments across the spectrum of HD remains unclear. We report on emotion recognition data from 344 participants comprising premanifest HD (PreHD) and early HD patients, and controls. In a test of recognition of facial emotions, we examined responses to six basic emotional expressions and neutral expressions. In addition, within the early HD sample, we tested for differences in emotion recognition performance between those 'on' vs. 'off' neuroleptic or selective serotonin reuptake inhibitor (SSRI) medications. The PreHD groups showed significantly impaired recognition (p<0.05), compared to controls, of fearful, angry and surprised faces, whereas the early HD groups were significantly impaired across all emotions including neutral expressions. In early HD, neuroleptic use was associated with worse facial emotion recognition, whereas SSRI use was associated with better facial emotion recognition. The findings suggest that emotion recognition impairments exist across the HD spectrum, but are relatively more widespread in manifest HD than in the premanifest period. Commonly prescribed medications to treat HD-related symptoms also appear to affect emotion recognition. These findings have important implications for interpersonal communication and medication usage in HD. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Mineralocorticoid receptor haplotype, estradiol, progesterone and emotional information processing.
Hamstra, Danielle A; de Kloet, E Ronald; Quataert, Ina; Jansen, Myrthe; Van der Does, Willem
2017-02-01
Carriers of MR-haplotype 1 and 3 (GA/CG; rs5522 and rs2070951) are more sensitive to the influence of oral contraceptives (OC) and menstrual cycle phase on emotional information processing than MR-haplotype 2 (CA) carriers. We investigated whether this effect is associated with estradiol (E2) and/or progesterone (P4) levels. Healthy MR-genotyped premenopausal women were tested twice in a counterbalanced design. Naturally cycling (NC) women were tested in the early-follicular and mid-luteal phase, and OC-users during OC-intake and in the pill-free week. At both sessions E2 and P4 were assessed in saliva. Tests included implicit and explicit positive and negative affect, attentional blink accuracy, emotional memory, emotion recognition, and risky decision-making (gambling). MR-haplotype 2 homozygotes had higher implicit happiness scores than MR-haplotype 2 heterozygotes (p=0.031) and MR-haplotype 1/3 carriers (p<0.001). MR-haplotype 2 homozygotes also had longer reaction times to happy faces in an emotion recognition test than MR-haplotype 1/3 carriers (p=0.001). Practice effects were observed for most measures. The pattern of correlations between information processing and P4 or E2 differed between sessions, as did the moderating effects of the MR genotype. In the first session the MR-genotype moderated the influence of P4 on implicit anxiety (sr=-0.30; p=0.005): higher P4 was associated with a reduction in implicit anxiety, but only in MR-haplotype 2 homozygotes (sr=-0.61; p=0.012). In the second session the MR-genotype moderated the influence of E2 on the recognition of facial expressions of happiness (sr=-0.21; p=0.035): only in MR-haplotype 1/3 carriers was higher E2 correlated with happiness recognition (sr=0.29; p=0.005). In the second session higher E2 and P4 were negatively correlated with accuracy in lag-2 trials of the attentional blink task (p<0.001); accordingly, NC women, compared to OC-users, performed worse on lag-2 trials (p=0.041). The higher implicit happiness scores of MR-haplotype 2 homozygotes are in line with previous reports. Performance in the attentional blink task may be influenced by OC-use. The MR-genotype moderates the influence of E2 and P4 on emotional information processing, and this moderating effect may depend on the novelty of the situation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Rosenberg, Hannah; Dethier, Marie; Kessels, Roy P C; Westbrook, R Frederick; McDonald, Skye
2015-07-01
Traumatic brain injury (TBI) impairs emotion perception. Perception of negative emotions (sadness, disgust, fear, and anger) is reportedly affected more than positive ones (happiness and surprise). It has been argued that this reflects a specialized neural network underpinning negative emotions that is vulnerable to brain injury. However, studies typically do not equate for differential difficulty between emotions. We aimed to examine whether emotion recognition deficits in people with TBI were specific to negative emotions, while equating task difficulty, and to determine whether perception deficits might be accounted for by other cognitive processes. Twenty-seven people with TBI and 28 matched control participants identified 6 basic emotions at 2 levels of intensity: (a) the conventional 100% intensity and (b) an "equated intensity", that is, an intensity that yielded comparable accuracy rates across emotions in controls. (a) At 100% intensity, the TBI group was impaired in recognizing anger, fear, and disgust but not happiness, surprise, or sadness, and performed worse on negative than positive emotions. (b) At equated intensity, the TBI group was poorer than controls overall but not differentially poorer at recognizing negative emotions. Although processing speed and nonverbal reasoning were associated with emotion accuracy, injury severity by itself was a unique predictor. When task difficulty is taken into account, individuals with TBI show impairment in recognizing all facial emotions. There was no evidence of a specific impairment for negative emotions or any particular emotion. Impairment was accounted for by injury severity rather than being a secondary effect of reduced neuropsychological functioning. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Automatic recognition of emotions from facial expressions
NASA Astrophysics Data System (ADS)
Xue, Henry; Gertner, Izidor
2014-06-01
In the human-computer interaction (HCI) process it is desirable to have an artificial intelligence (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security and in the entertainment industry, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes the data images and applies a set of filters to them without sacrificing accuracy. In addition, we extended SVM classification to multiple dimensions while retaining SVM's high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
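The abstract does not name the filter set, so the sketch below substitutes a small Gabor filter bank, a common choice for facial expression features, after the resize step the authors describe. Everything here is hypothetical: random arrays stand in for JAFFE-style face crops, and scikit-learn's default multi-class SVM replaces the authors' multi-dimensional extension.

```python
import numpy as np
from skimage.filters import gabor
from skimage.transform import resize
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
images = rng.random(size=(80, 64, 64))        # stand-ins for grayscale face crops
labels = np.repeat(np.arange(7), 12)[:80]     # balanced dummy emotion labels

def gabor_features(img, size=(32, 32)):
    """Resize, then pool the responses of a small Gabor filter bank."""
    img = resize(img, size, anti_aliasing=True)
    feats = []
    for freq in (0.1, 0.2, 0.4):
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, _ = gabor(img, frequency=freq, theta=theta)
            feats += [real.mean(), real.std()]
    return feats

X = np.array([gabor_features(im) for im in images])
print("5-fold CV accuracy:", cross_val_score(SVC(), X, labels, cv=5).mean())
```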
Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.
Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M
2014-11-01
Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. In order to identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were administered in order to assess potentially confounding visuo-perceptual functioning and facial processing. Nineteen participants with paranoid SZ, 22 with HFASD and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD on all emotion recognition measures, but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is a SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but does show visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of both disorders. Copyright © 2014 Elsevier B.V. All rights reserved.
Melkonian, Alexander J; Ham, Lindsay S; Bridges, Ana J; Fugitt, Jessica L
2017-10-01
High rates of sexual victimization among college students necessitate further study of factors associated with sexual assault risk detection. The present study examined how social information processing relates to sexual assault risk detection as a function of sexual assault victimization history. Participants were 225 undergraduates (M age = 19.12, SD = 1.44; 66% women) who completed an online questionnaire assessing victimization history, an emotion identification task, and a sexual assault risk detection task between June 2013 and May 2014. Emotion identification moderated the association between victimization history and risk detection, such that sexual assault survivors with lower emotion identification accuracy also reported the least risk in a sexual assault vignette. Findings suggest that differences in social information processing, specifically recognition of others' emotions, are associated with sexual assault risk detection. College prevention programs could incorporate emotional awareness strategies, particularly for men and women who are sexual assault survivors.
Non-monotonic relationships between emotional arousal and memory for color and location.
Boywitt, C Dennis
2015-01-01
Recent research points to the decreased diagnostic value of subjective retrieval experience for memory accuracy for emotional stimuli. While for neutral stimuli rich recollective experiences are associated with better context memory than merely familiar memories, this association appears questionable for emotional stimuli. The present research tested the implicit assumption that the effect of emotional arousal on memory is monotonic, that is, steadily increasing (or decreasing) with increasing arousal. In two experiments emotional arousal was manipulated in three steps using emotional pictures, and subjective retrieval experience as well as context memory were assessed. The results show an inverted U-shape relationship between arousal and recognition memory, but for context memory and retrieval experience the relationship was more complex: context memory for frame colour decreased linearly with arousal, while context memory for spatial location followed the inverted U-shape function. These complex, non-monotonic relationships between arousal and memory are discussed as possible explanations for earlier divergent findings.
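An inverted-U claim of this kind is commonly tested by adding a quadratic arousal term to a linear model and checking its sign and significance. The sketch below illustrates that test on simulated data; the effect sizes and noise level are invented for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
# simulated items: arousal ratings (1-9) and recognition accuracy per item
arousal = rng.uniform(1, 9, size=300)
accuracy = (0.5 + 0.12 * arousal - 0.012 * arousal ** 2
            + rng.normal(0, 0.05, size=300))

# allow an inverted U by including a quadratic term alongside the linear one
X = sm.add_constant(np.column_stack([arousal, arousal ** 2]))
fit = sm.OLS(accuracy, X).fit()
print(fit.params)        # a reliably negative quadratic coefficient ...
print(fit.pvalues[2])    # ... indicates an inverted-U relationship
```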
Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition
Murphy, Jillian M.; Ridley, Nicole J.; Vercammen, Ans
2015-01-01
The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine whether anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS on responses to the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction. PMID:25971602
Does cortisol modulate emotion recognition and empathy?
Duesenberg, Moritz; Weber, Juliane; Schulze, Lars; Schaeuffele, Carmen; Roepke, Stefan; Hellmann-Regen, Julian; Otte, Christian; Wingenfeld, Katja
2016-04-01
Emotion recognition and empathy are important aspects of interacting with and understanding other people's behaviors and feelings. Everyday life comprises stressful situations that impact social interactions on a daily basis. The aim of the study was to examine the effects of the stress hormone cortisol on emotion recognition and empathy. In this placebo-controlled study, 40 healthy men and 40 healthy women (mean age 24.5 years) received either 10 mg of hydrocortisone or placebo. We used the Multifaceted Empathy Test to measure emotional and cognitive empathy. Furthermore, we examined emotion recognition from facial expressions, which contained two emotions (anger and sadness) and two emotion intensities (40% and 80%). We did not find a main effect of treatment or sex on either empathy or emotion recognition, but we did find a sex × emotion interaction on emotion recognition. The main result was a four-way interaction on emotion recognition including treatment, sex, emotion and task difficulty. At 40% intensity, women recognized angry faces better than men in the placebo condition; furthermore, in the placebo condition, men recognized sadness better than anger. At 80% intensity, men and women performed equally well in recognizing sad faces, but men performed worse than women with regard to angry faces. Our results thus did not support the hypothesis that increases in cortisol concentration alone influence empathy and emotion recognition in healthy young individuals. However, sex and task difficulty appear to be important variables in emotion recognition from facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Exploring Cultural Differences in the Recognition of the Self-Conscious Emotions.
Chung, Joanne M; Robins, Richard W
2015-01-01
Recent research suggests that the self-conscious emotions of embarrassment, shame, and pride have distinct, nonverbal expressions that can be recognized in the United States at above-chance levels. However, few studies have examined the recognition of these emotions in other cultures, and little research has been conducted in Asia. Consequently the cross-cultural generalizability of self-conscious emotions has not been firmly established. Additionally, there is no research that examines cultural variability in the recognition of the self-conscious emotions. Cultural values and exposure to Western culture have been identified as contributors to variability in recognition rates for the basic emotions; we sought to examine this for the self-conscious emotions using the University of California, Davis Set of Emotion Expressions (UCDSEE). The present research examined recognition of the self-conscious emotion expressions in South Korean college students and found that recognition rates were very high for pride, low but above chance for shame, and near zero for embarrassment. To examine what might be underlying the recognition rates we found in South Korea, recognition of self-conscious emotions and several cultural values were examined in a U.S. college student sample of European Americans, Asian Americans, and Asian-born individuals. Emotion recognition rates were generally similar between the European Americans and Asian Americans, and higher than emotion recognition rates for Asian-born individuals. These differences were not explained by cultural values in an interpretable manner, suggesting that exposure to Western culture is a more important mediator than values.
Impact of emotionality on memory and meta-memory in schizophrenia using video sequences.
Peters, Maarten J V; Hauschildt, Marit; Moritz, Steffen; Jelinek, Lena
2013-03-01
A vast amount of memory and meta-memory research in schizophrenia shows that these patients perform worse on memory accuracy and hold false information with stronger conviction compared to healthy controls. So far, studies investigating these effects have mainly used traditional static stimulus material such as word lists or pictures. The question remains (1) whether these memory and meta-memory effects are also present in more near-life, dynamic situations (i.e., using standardized videos) and (2) whether emotionality influences memory and meta-memory deficits (i.e., response confidence) in schizophrenia compared to healthy controls. Twenty-seven schizophrenia patients and 24 healthy controls were administered a newly developed emotional video paradigm with five videos differing in emotionality (positive, two negative, neutral, and delusion-related). After each video, a recognition task required participants to make old-new discriminations along with confidence ratings, probing memory accuracy and meta-memory deficits in more dynamic settings. For all but the positively valenced video, patients recognized fewer correct items than healthy controls, and did not differ with regard to the number of false memories for related items. In line with prior findings, schizophrenia patients showed more high-confidence responses for misses and for false memories of related items, but displayed underconfidence for hits when compared to healthy controls, independent of emotionality. Limitations include the modest sample size and control group, the combined valence and arousal indicator for emotionality, and the general psychopathology indicator. Emotionality differentially moderated memory accuracy and biases in schizophrenia patients compared to controls. Moreover, the meta-memory deficits identified in static paradigms also manifest in more dynamic, near-life settings and seem to be independent of emotionality. Copyright © 2012 Elsevier Ltd. All rights reserved.
Recognition of facial emotions in neuropsychiatric disorders.
Kohler, Christian G; Turner, Travis H; Gur, Raquel E; Gur, Ruben C
2004-04-01
Recognition of facial emotions represents an important aspect of interpersonal communication and is governed by select neural substrates. We present data on emotion recognition in healthy young adults utilizing a novel set of color photographs of evoked universal emotions. In addition, we review the recent literature on emotion recognition in psychiatric and neurologic disorders, and studies that compare different disorders.
Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality
Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque; Javaid, Ahmad Y
2018-01-01
Extensive application possibilities have made emotion recognition ineluctable and challenging in the field of computer science. Non-verbal cues such as gestures, body movement, and facial expressions convey the user's feelings and feedback. This discipline of Human-Computer Interaction relies on algorithmic robustness and the sensitivity of the sensor to improve recognition. Sensors play a significant role in accurate detection by providing very high-quality input, hence increasing the efficiency and reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting emotions from facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam. PMID:29389845
[Impact of facial emotional recognition alterations in Dementia of the Alzheimer type].
Rubinstein, Wanda; Cossini, Florencia; Politis, Daniel
2016-07-01
Facial recognition of basic emotions is independent of other deficits in dementia of the Alzheimer type, and among these deficits there is disagreement about which emotions are more difficult to recognize. Our aim was to study the presence of alterations in the facial recognition of basic emotions, and to investigate whether there were differences in the recognition of each type of emotion, in Alzheimer's disease. Using three tests of recognition of basic facial emotions, we evaluated 29 patients who had been diagnosed with dementia of the Alzheimer type and 18 control subjects. Significant group differences were obtained on the tests of recognition of basic facial emotions, and between the individual emotions. Since the amygdala, one of the brain structures responsible for emotional reactions, is affected in the early stages of this disease, our findings are relevant to understanding how this alteration of emotional recognition processes contributes to the difficulties these patients have in interpersonal relations, and to their behavioral disorders.
Mapping correspondence between facial mimicry and emotion recognition in healthy subjects.
Ponari, Marta; Conson, Massimiliano; D'Amico, Nunzia Pina; Grossi, Dario; Trojano, Luigi
2012-12-01
We aimed at verifying the hypothesis that facial mimicry is causally and selectively involved in emotion recognition. For this purpose, in Experiment 1, we explored the effect of tonic contraction of muscles in the upper or lower half of participants' faces on their ability to recognize emotional facial expressions. We found that the "lower" manipulation specifically impaired recognition of happiness and disgust, the "upper" manipulation impaired recognition of anger, while both manipulations affected recognition of fear; recognition of surprise and sadness was not affected by either blocking manipulation. In Experiment 2, we verified whether emotion recognition is hampered by stimuli in which an upper or lower half-face showing an emotional expression is combined with a neutral half-face. We found that the neutral lower half-face interfered with recognition of happiness and disgust, whereas the neutral upper half impaired recognition of anger; recognition of fear and sadness was impaired by both manipulations, whereas recognition of surprise was not affected by either manipulation. Taken together, the present findings support simulation models of emotion recognition and provide insight into the role of mimicry in comprehension of others' emotional facial expressions. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Age-related differences in emotion recognition ability: a cross-sectional study.
Mill, Aire; Allik, Jüri; Realo, Anu; Valk, Raivo
2009-10-01
Experimental studies indicate that recognition of emotions, particularly negative emotions, decreases with age. However, there is no consensus at which age the decrease in emotion recognition begins, how selective it is to negative emotions, and whether it applies to both facial and vocal expression. In the current cross-sectional study, 607 participants ranging in age from 18 to 84 years (mean age = 32.6 ± 14.9 years) were asked to recognize emotions expressed either facially or vocally. In general, older participants were found to be less accurate at recognizing emotions, with the most distinctive age difference pertaining to a certain group of negative emotions. Both modalities revealed an age-related decline in the recognition of sadness and, to a lesser degree, anger, starting at about 30 years of age. Although age-related differences in the recognition of expression of emotion were not mediated by personality traits, 2 of the Big 5 traits, openness and conscientiousness, made an independent contribution to emotion-recognition performance. Implications of age-related differences in facial and vocal emotion expression and the early onset of the selective decrease in emotion recognition are discussed in terms of previous findings and relevant theoretical models.
An investigation of the effect of race-based social categorization on adults' recognition of emotion.
Reyes, B Nicole; Segal, Shira C; Moulson, Margaret C
2018-01-01
Emotion recognition is important for social interaction and communication, yet previous research has identified a cross-cultural emotion recognition deficit: Recognition is less accurate for emotions expressed by individuals from a cultural group different than one's own. The current study examined whether social categorization based on race, in the absence of cultural differences, influences emotion recognition in a diverse context. South Asian and White Canadians in the Greater Toronto Area completed an emotion recognition task that required them to identify the seven basic emotional expressions when posed by members of the same two groups, allowing us to tease apart the contributions of culture and social group membership. Contrary to our hypothesis, there was no mutual in-group advantage in emotion recognition: Participants were not more accurate at recognizing emotions posed by their respective racial in-groups. Both groups were more accurate at recognizing expressions when posed by South Asian faces, and White participants were more accurate overall compared to South Asian participants. These results suggest that in a diverse environment, categorization based on race alone does not lead to the creation of social out-groups in a way that negatively impacts emotion recognition. PMID:29474367
A multimodal approach to emotion recognition ability in autism spectrum disorders.
Jones, Catherine R G; Pickles, Andrew; Falcaro, Milena; Marsden, Anita J S; Happé, Francesca; Scott, Sophie K; Sauter, Disa; Tregay, Jenifer; Phillips, Rebecca J; Baird, Gillian; Simonoff, Emily; Charman, Tony
2011-03-01
Autism spectrum disorders (ASD) are characterised by social and communication difficulties in day-to-day life, including problems in recognising emotions. However, experimental investigations of emotion recognition ability in ASD have been equivocal, hampered by small sample sizes, narrow IQ range and over-focus on the visual modality. We tested 99 adolescents (mean age 15;6 years, mean IQ 85) with an ASD and 57 adolescents without an ASD (mean age 15;6 years, mean IQ 88) on a facial emotion recognition task and two vocal emotion recognition tasks (one verbal; one non-verbal). Recognition of happiness, sadness, fear, anger, surprise and disgust were tested. Using structural equation modelling, we conceptualised emotion recognition ability as a multimodal construct, measured by the three tasks. We examined how the mean levels of recognition of the six emotions differed by group (ASD vs. non-ASD) and IQ (≥ 80 vs. < 80). We found no evidence of a fundamental emotion recognition deficit in the ASD group and analysis of error patterns suggested that the ASD group were vulnerable to the same pattern of confusions between emotions as the non-ASD group. However, recognition ability was significantly impaired in the ASD group for surprise. IQ had a strong and significant effect on performance for the recognition of all six emotions, with higher IQ adolescents outperforming lower IQ adolescents. The findings do not suggest a fundamental difficulty with the recognition of basic emotions in adolescents with ASD. © 2010 The Authors. Journal of Child Psychology and Psychiatry © 2010 Association for Child and Adolescent Mental Health.
Berzenski, Sara R; Yates, Tuppett M
2017-10-01
The ability to recognize and label emotions serves as a building block by which children make sense of the world and learn how to interact with social partners. However, the timing and salience of influences on emotion recognition development are not fully understood. Path analyses evaluated the contributions of parenting and child narrative coherence to the development of emotion recognition across ages 4 through 8 in a diverse (50% female; 46% Hispanic, 18.4% Black, 11.2% White, 0.4% Asian, 24.0% multiracial) longitudinally followed sample of 250 caregiver-child dyads. Parenting behaviors during interactions (i.e., support, instructional quality, intrusiveness, and hostility) and children's narrative coherence during the MacArthur Story Stem Battery were observed at ages 4 and 6. Emotion recognition increased from age 4 to 8. Parents' supportive presence at age 4 and instructional quality at age 6 predicted increased emotion recognition at 8, beyond initial levels of emotion recognition and child cognitive ability. There were no significant effects of negative parenting (i.e., intrusiveness or hostility) at 4 or 6 on emotion recognition. Child narrative coherence at ages 4 and 6 predicted increased emotion recognition at 8. Emotion recognition at age 4 predicted increased parent instructional quality and decreased intrusiveness at 6. These findings clarify whether and when familial and child factors influence emotion recognition development. Influences on emotion recognition development emerged as differentially salient across time periods, such that there is a need to develop and implement targeted interventions to promote positive parenting skills and children's narrative coherence at specific ages. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Stereotypes and prejudice affect the recognition of emotional body postures.
Bijlstra, Gijsbert; Holland, Rob W; Dotsch, Ron; Wigboldus, Daniel H J
2018-03-26
Most research on emotion recognition focuses on facial expressions. However, people communicate emotional information through bodily cues as well. Prior research on facial expressions has demonstrated that emotion recognition is modulated by top-down processes. Here, we tested whether this top-down modulation generalizes to the recognition of emotions from body postures. We report three studies demonstrating that stereotypes and prejudice about men and women may affect how fast people classify various emotional body postures. Our results suggest that gender cues activate gender associations, which affect the recognition of emotions from body postures in a top-down fashion. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Farsham, Aida; Abbaslou, Tahereh; Bidaki, Reza; Bozorg, Bonnie
2017-01-01
Objective: No research has been conducted on facial emotion recognition in patients with borderline personality disorder (BPD) and schizotypal personality disorder (SPD). The present study aimed at comparing facial emotion recognition in these patients with that of the general population. The neurocognitive processing of emotions can reveal the pathologic style of these 2 disorders. Method: Twenty BPD patients, 16 SPD patients, and 20 healthy individuals were selected by the available sampling method. The Structured Clinical Interview for Axis II, the Millon Personality Inventory, the Beck Depression Inventory and the Facial Emotional Recognition Test were conducted for all participants. Results: One-way ANOVA and Scheffe's post hoc test analysis revealed significant differences in neuropsychological assessment of facial emotional recognition between the BPD and SPD patients and the normal group (p = 0.001). A significant difference was found in emotion recognition of fear between the BPD group and the normal population (p = 0.008). A significant difference was observed between SPD patients and the control group in emotion recognition of surprise (p = 0.04). The obtained results indicated a deficit in negative emotion recognition, especially disgust; thus, it can be concluded that these patients have the same neurocognitive profile in the emotion domain. PMID:28659980
Fenske, Sabrina; Lis, Stefanie; Liebke, Lisa; Niedtfeld, Inga; Kirsch, Peter; Mier, Daniela
2015-01-01
Borderline Personality Disorder (BPD) is characterized by severe deficits in social interactions, which might be linked to deficits in emotion recognition. Research on emotion recognition abilities in BPD has revealed heterogeneous results, ranging from deficits to heightened sensitivity. The most stable findings point to an impairment in the evaluation of neutral facial expressions as neutral, as well as to a negative bias in emotion recognition; that is, the tendency to attribute negative emotions to neutral expressions, or in a broader sense to report a more negative emotion category than the one depicted. However, it remains unclear which contextual factors influence the occurrence of this negative bias. Previous studies suggest that priming by preceding emotional information and constrained processing time might augment the emotion recognition deficit in BPD. To test these assumptions, 32 female BPD patients and 31 healthy females, matched for age and education, participated in an emotion recognition study in which every facial expression was preceded by either a positive, neutral or negative scene. Furthermore, time constraints for processing were varied by presenting the facial expressions with short (100 ms) or long duration (up to 3000 ms) in two separate blocks. BPD patients showed a significant deficit in emotion recognition for neutral and positive facial expressions, associated with a significant negative bias. In BPD patients, this emotion recognition deficit was differentially affected by preceding emotional information and time constraints, with a greater influence of emotional information during long face presentations and a greater influence of neutral information during short face presentations. Our results are in line with previous findings supporting the existence of a negative bias in emotion recognition in BPD patients, and provide further insights into biased social perception in BPD patients.
Socioemotional deficits associated with obsessive-compulsive symptomatology.
Grisham, Jessica R; Henry, Julie D; Williams, Alishia D; Bailey, Phoebe E
2010-02-28
Increasing emphasis has been placed on the role of socioemotional functioning in models of obsessive-compulsive disorder (OCD). The present study investigated whether OCD symptoms were associated with capacity for theory of mind (ToM) and basic affect recognition. Non-clinical volunteers (N=204) completed self-report measures of OCD and general psychopathology, in addition to behavioral measures of ToM and affect recognition. The results indicated that higher OCD symptoms were associated with reduced ToM, as well as reduced accuracy decoding the specific emotion of disgust. Importantly, these relationships could not be attributed to other, more general features of psychopathology. The findings of the current study therefore further our understanding of how the processing and interpretation of social and emotional information is affected in the context of OCD symptomatology, and are discussed in relation to neuropsychological models of OCD. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
The effect of mild acute stress during memory consolidation on emotional recognition memory.
Corbett, Brittany; Weinberg, Lisa; Duarte, Audrey
2017-11-01
Stress during consolidation improves recognition memory performance. Generally, this memory benefit is greater for emotionally arousing stimuli than for neutral stimuli. The strength of the stressor also plays a role in memory performance, with memory improving up to a moderate level of stress and worsening thereafter. As our daily stressors are generally minimal in strength, we chose to induce mild acute stress to determine its effect on memory performance. In the current study, we investigated whether mild acute stress during consolidation improves memory performance for emotionally arousing images. To investigate this, we had participants encode highly arousing negative, minimally arousing negative, and neutral images. We induced stress using the Montreal Imaging Stress Task (MIST) in half of the participants and administered a control task to the other half directly after encoding (i.e. during consolidation), and tested recognition 48 h later. We found no difference in memory performance between the stress and control groups. We found a graded pattern in confidence, with responders in the stress group having the least confidence in their hits and controls having the most. Across groups, we found highly arousing negative images were better remembered than minimally arousing negative or neutral images. Although stress did not affect memory accuracy, responders, as defined by cortisol reactivity, were less confident in their decisions. Our results suggest that the daily stressors humans experience, regardless of their emotional content, do not have adverse effects on memory. Copyright © 2017 Elsevier Inc. All rights reserved.
Goghari, Vina M; Macdonald, Angus W; Sponheim, Scott R
2011-11-01
Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.
Nardelli, M; Greco, A; Valenza, G; Lanata, A; Bailon, R; Scilingo, E P
2017-07-01
This paper reports on a novel method for the analysis of Heart Rate Variability (HRV) through Lagged Poincaré Plot (LPP) theory. Specifically, a hybrid method, LPPsymb, combining LPP quantifiers and related symbolic dynamics, was proposed. LPP has been applied to investigate the autonomic response to pleasant and unpleasant pictures extracted from the International Affective Picture System (IAPS). IAPS pictures are standardized in terms of level of arousal, i.e. the intensity of the evoked emotion, and valence, i.e. the level of pleasantness/unpleasantness, according to the Circumplex Model of Affect (CMA). Twenty-two healthy subjects were enrolled in the experiment, which comprised four sessions with increasing arousal level. Within each session, valence varied from positive to negative. An ad-hoc pattern recognition algorithm using a Leave-One-Subject-Out (LOSO) procedure based on a Quadratic Discriminant Classifier (QDC) was implemented. Our pattern recognition system was able to classify pleasant and unpleasant sessions with an accuracy of 71.59%. Therefore, we can suggest the use of LPPsymb for emotion recognition.
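The abstract names the ingredients (lagged Poincaré quantifiers, a quadratic discriminant classifier, leave-one-subject-out validation) without implementation detail; the following is a hedged sketch of how those pieces fit together, not the authors' LPPsymb method, with simulated RR-interval data and an assumed feature layout.

```python
# Sketch: lagged Poincare descriptors of an RR series fed to a LOSO-validated
# quadratic discriminant classifier. Illustrative only; the paper's LPPsymb
# additionally folds in symbolic dynamics, which is not reproduced here.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def lagged_poincare(rr, lag):
    """SD1/SD2 of the Poincare plot of RR[i] vs RR[i+lag] (ms)."""
    x, y = rr[:-lag], rr[lag:]
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)   # short-term variability
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)   # long-term variability
    return sd1, sd2

def features(rr, max_lag=6):
    return np.concatenate([lagged_poincare(rr, m) for m in range(1, max_lag + 1)])

# Hypothetical data: one RR series per (subject, session), labelled by valence.
rng = np.random.default_rng(0)
X, y, groups = [], [], []
for subject in range(22):                       # 22 subjects, as in the study
    for valence in (0, 1):                      # 0 = pleasant, 1 = unpleasant
        rr = 800 + 50 * rng.standard_normal(300)
        X.append(features(rr)); y.append(valence); groups.append(subject)

# Leave-one-subject-out evaluation of the quadratic discriminant classifier.
scores = cross_val_score(QuadraticDiscriminantAnalysis(), np.array(X), y,
                         groups=groups, cv=LeaveOneGroupOut())
print(f"LOSO accuracy: {scores.mean():.3f}")    # near chance on random data
```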
Leist, Tatyana; Dadds, Mark R
2009-04-01
Emotional processing styles appear to characterize various forms of psychopathology and environmental adversity in children. For example, autistic, anxious, high- and low-emotion conduct problem children, and children who have been maltreated, all appear to show specific deficits and strengths in recognizing the facial expressions of emotions. Until now, the relationships between emotion recognition, antisocial behaviour, emotional problems, callous-unemotional (CU) traits and early maltreatment have never been assessed simultaneously in one study, and the specific associations of emotion recognition with maltreatment and child characteristics are therefore unknown. We examined facial-emotion processing in a sample of 23 adolescents selected for high-risk status on the variables of interest. As expected, maltreatment and child characteristics showed unique associations. CU traits were uniquely related to impairments in fear recognition. Antisocial behaviour was uniquely associated with better fear recognition, but impaired anger recognition. Emotional problems were associated with better recognition of anger and sadness, but lower recognition of neutral faces. Maltreatment was predictive of superior recognition of fear and sadness. The findings are considered in terms of social information-processing theories of psychopathology. Implications for clinical interventions are discussed.
Luebbe, Aaron M; Fussner, Lauren M; Kiel, Elizabeth J; Early, Martha C; Bell, Debora J
2013-12-01
Depressive symptomatology is associated with impaired recognition of emotion. Previous investigations have predominantly focused on emotion recognition of static facial expressions neglecting the influence of social interaction and critical contextual factors. In the current study, we investigated how youth and maternal symptoms of depression may be associated with emotion recognition biases during familial interactions across distinct contextual settings. Further, we explored if an individual's current emotional state may account for youth and maternal emotion recognition biases. Mother-adolescent dyads (N = 128) completed measures of depressive symptomatology and participated in three family interactions, each designed to elicit distinct emotions. Mothers and youth completed state affect ratings pertaining to self and other at the conclusion of each interaction task. Using multiple regression, depressive symptoms in both mothers and adolescents were associated with biased recognition of both positive affect (i.e., happy, excited) and negative affect (i.e., sadness, anger, frustration); however, this bias emerged primarily in contexts with a less strong emotional signal. Using actor-partner interdependence models, results suggested that youth's own state affect accounted for depression-related biases in their recognition of maternal affect. State affect did not function similarly in explaining depression-related biases for maternal recognition of adolescent emotion. Together these findings suggest a similar negative bias in emotion recognition associated with depressive symptoms in both adolescents and mothers in real-life situations, albeit potentially driven by different mechanisms.
Improving Negative Emotion Recognition in Young Offenders Reduces Subsequent Crime.
Hubble, Kelly; Bowen, Katharine L; Moore, Simon C; van Goozen, Stephanie H M
2015-01-01
Children with antisocial behaviour show deficits in the perception of emotional expressions in others that may contribute to the development and persistence of antisocial and aggressive behaviour. Current treatments for antisocial youngsters are limited in effectiveness. It has been argued that more attention should be devoted to interventions that target neuropsychological correlates of antisocial behaviour. This study examined the effect of emotion recognition training on criminal behaviour. Emotion recognition and crime levels were studied in 50 juvenile offenders. Whilst all young offenders received their statutory interventions as the study was conducted, a subgroup of twenty-four offenders also took part in a facial affect training aimed at improving emotion recognition. Offenders in the training and control groups were matched for age, SES, IQ and lifetime crime level. All offenders were tested twice for emotion recognition performance, and recent crime data were collected after the testing had been completed. Before the training there were no differences between the groups in emotion recognition, with both groups displaying poor fear, sadness and anger recognition. After the training fear, sadness and anger recognition improved significantly in juvenile offenders in the training group. Although crime rates dropped in all offenders in the 6 months following emotion testing, only the group of offenders who had received the emotion training showed a significant reduction in the severity of the crimes they committed. The study indicates that emotion recognition can be relatively easily improved in youths who engage in serious antisocial and criminal behavior. The results suggest that improved emotion recognition has the potential to reduce the severity of reoffending.
Impaired recognition of happy facial expressions in bipolar disorder.
Lawlor-Savage, Linette; Sponheim, Scott R; Goghari, Vina M
2014-08-01
The ability to accurately judge facial expressions is important in social interactions. Individuals with bipolar disorder have been found to be impaired in emotion recognition; however, the specifics of the impairment are unclear. This study investigated whether facial emotion recognition difficulties in bipolar disorder reflect general cognitive, or emotion-specific, impairments. Impairment in the recognition of particular emotions and the role of processing speed in facial emotion recognition were also investigated. Clinically stable bipolar patients (n = 17) and healthy controls (n = 50) judged five facial expressions in two presentation types, time-limited and self-paced. An age recognition condition was used as an experimental control. Bipolar patients' overall facial recognition ability was unimpaired. However, patients' specific ability to judge happy expressions under time constraints was impaired. Findings suggest a deficit in happy emotion recognition impacted by processing speed. Given the limited sample size, further investigation with a larger patient sample is warranted.
Emotion recognition pattern in adolescent boys with attention-deficit/hyperactivity disorder.
Aspan, Nikoletta; Bozsik, Csilla; Gadoros, Julia; Nagy, Peter; Inantsy-Pap, Judit; Vida, Peter; Halasz, Jozsef
2014-01-01
Social and emotional deficits were recently considered as inherent features of individuals with attention-deficit hyperactivity disorder (ADHD), but only sporadic literature data exist on emotion recognition in adolescents with ADHD. The aim of the present study was to establish the emotion recognition profile of adolescent boys with ADHD in comparison with control adolescents. Forty-four adolescent boys (13-16 years) participated in the study after informed consent; 22 boys had a clinical diagnosis of ADHD, while data were also assessed from 22 adolescent control boys matched for age and Raven IQ. Parent- and self-reported behavioral characteristics were assessed by means of the Strengths and Difficulties Questionnaire. The recognition of six basic emotions was evaluated by the "Facial Expressions of Emotion-Stimuli and Tests." Compared to controls, adolescents with ADHD were more sensitive in the recognition of disgust, worse in the recognition of fear, and showed a tendency for impaired recognition of sadness. Hyperactivity measures showed an inverse correlation with fear recognition. Our data suggest that adolescent boys with ADHD have alterations in the recognition of specific emotions.
Thonse, Umesh; Behere, Rishikesh V; Praharaj, Samir Kumar; Sharma, Podila Sathya Venkata Narasimha
2018-06-01
Facial emotion recognition deficits have been consistently demonstrated in patients with severe mental disorders. Expressed emotion is found to be an important predictor of relapse. However, the relationship between facial emotion recognition abilities and expressed emotions, and its influence on socio-occupational functioning, in schizophrenia versus bipolar disorder has not been studied. In this study we examined 91 patients with schizophrenia and 71 with bipolar disorder for psychopathology, socio-occupational functioning and emotion recognition abilities. Primary caregivers of 62 patients with schizophrenia and 49 with bipolar disorder were assessed on the Family Attitude Questionnaire to assess their expressed emotions. Patients with schizophrenia and bipolar disorder performed similarly on the emotion recognition task. Patients in the schizophrenia group experienced more critical comments and had poorer socio-occupational functioning compared to patients with bipolar disorder. Poorer socio-occupational functioning in patients with schizophrenia was significantly associated with greater dissatisfaction in their caregivers. In patients with bipolar disorder, poorer emotion recognition scores significantly correlated with poorer adaptive living skills and greater hostility and dissatisfaction in their caregivers. The findings of our study suggest that emotion recognition abilities in patients with bipolar disorder are associated with negative expressed emotions, leading to problems in adaptive living skills. Copyright © 2018 Elsevier B.V. All rights reserved.
Social approach and emotion recognition in fragile X syndrome.
Williams, Tracey A; Porter, Melanie A; Langdon, Robyn
2014-03-01
Evidence is emerging that individuals with Fragile X syndrome (FXS) display emotion recognition deficits, which may contribute to their significant social difficulties. The current study investigated the emotion recognition abilities, and social approachability judgments, of FXS individuals when processing emotional stimuli. Relative to chronological age- (CA-) and mental age- (MA-) matched controls, the FXS group performed significantly more poorly on the emotion recognition tasks, and displayed a bias towards detecting negative emotions. Moreover, after controlling for emotion recognition deficits, the FXS group displayed significantly reduced ratings of social approachability. These findings suggest that a social anxiety pattern, rather than poor socioemotional processing, may best explain the social avoidance observed in FXS.
Comparison of emotion recognition from facial expression and music.
Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija
2011-01-01
The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory cues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro expressions) and the recognition itself does not have to be a conscious process. We hypothesized that recognition of emotions from facial expressions would be favored over recognition of emotions communicated through music. In order to compare the success rate in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey which included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music works with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, whereas girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized if presented on human faces than in music, possibly because the understanding of facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for because of the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive skills such as attention, memory and motivation. Music pieces were processed differently in the brain than facial expressions and, consequently, probably evaluated differently as relevant emotional cues.
Interpersonal beliefs related to suicide and facial emotion processing in psychotic disorders.
Villa, Jennifer; Pinkham, Amy E; Kaufmann, Christopher N; Granholm, Eric; Harvey, Philip D; Depp, Colin A
2018-05-01
Deficits in social cognition are present in psychotic disorders; moreover, maladaptive interpersonal beliefs have been posited to underlie risk of suicidal ideation and behavior. However, the association between social cognition and negative appraisals as potential risk factors for suicidal ideation and behavior in psychotic disorders has not been assessed. In a pilot study, we assessed accuracy and error biases in facial emotion recognition (Penn ER-40), maladaptive interpersonal beliefs as measured by the Interpersonal Needs Questionnaire (INQ), and current suicide ideation and history of past attempts in a sample of 101 outpatients with psychotic disorders (75 schizophrenia/schizoaffective; 26 bipolar disorder). INQ scores were positively associated with history of suicide attempts and current ideation. INQ scores were inversely related to emotion recognition accuracy yet positively correlated with bias toward perceiving anger in neutral expressions. The association between anger biases and INQ scores persisted after adjusting for global cognitive ability and was more evident in schizophrenia than in bipolar disorder. The present findings suggest that maladaptive beliefs are associated with a tendency to misperceive neutral stimuli as threatening and are associated with suicidal ideation and behavior. Although better cognitive ability is associated with higher rates of suicide attempts in psychotic disorders, biases in misinterpreting anger in others may be a specific deficit related to the formation of maladaptive beliefs about others, which, in turn, are associated with history of suicide attempts. Copyright © 2018. Published by Elsevier Ltd.
Younger and Older Users’ Recognition of Virtual Agent Facial Expressions
Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.
2015-01-01
As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent's social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in depth younger and older adults' ability to label a virtual agent's facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a possible explanation for age-related differences in emotion recognition. First, our findings show age-related differences in the recognition of emotions expressed by a virtual agent, with older adults showing lower recognition for the emotions of anger, disgust, fear, happiness, sadness, and neutral. These age-related differences might be explained by older adults having difficulty discriminating similarity in the configural arrangement of facial features for certain emotions; for example, older adults often mislabeled the similar emotion of fear as surprise. Second, our results did not provide evidence for dynamic formation improving emotion recognition; but, in general, the intensity of the emotion improved recognition. Lastly, we learned that emotion recognition, for older and younger adults, differed by character type, from best to worst: human, synthetic human, and then iCat. Our findings provide guidance for design, as well as the development of a framework of age-related differences in emotion recognition. PMID:25705105
Miskowiak, K W; Larsen, J E; Harmer, C J; Siebner, H R; Kessing, L V; Macoveanu, J; Vinberg, M
2018-01-15
Negative cognitive bias and aberrant neural processing of self-referent emotional words seem to be trait-marks of depression. However, it is unclear whether these neurocognitive changes are present in unaffected first-degree relatives and constitute an illness endophenotype. Fifty-three healthy, never-depressed monozygotic or dizygotic twins with a co-twin history of depression (high-risk group: n = 26) or no first-degree family history of depression (low-risk group: n = 27) underwent neurocognitive testing and functional magnetic resonance imaging (fMRI) as part of a follow-up cohort study. Participants performed a self-referent emotional word categorisation task and free word recall task followed by a recognition task during fMRI. Participants also completed questionnaires assessing mood, personality traits and coping strategies. High-risk and low-risk twins (age, mean ± SD: 40 ± 11) were well-balanced for demographic variables, mood, coping and neuroticism. High-risk twins showed lower accuracy during self-referent categorisation of emotional words independent of valence and more false recollections of negative words than low-risk twins during free recall. Functional MRI yielded no differences between high-risk and low-risk twins in retrieval-specific neural activity for positive or negative words or during the recognition of negative versus positive words within the hippocampus or prefrontal cortex. The subtle display of negative recall bias is consistent with the hypothesis that self-referent negative memory bias is an endophenotype for depression. High-risk twins' lower categorisation accuracy adds to the evidence for valence-independent cognitive deficits in individuals at familial risk for depression. Copyright © 2017 Elsevier B.V. All rights reserved.
Automatic decoding of facial movements reveals deceptive pain expressions
Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang
2014-01-01
In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance, and after training, improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling. PMID:24656830
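As a hedged sketch of the general recipe described above (pattern recognition over the dynamics of automatically measured facial movements), and not the authors' system, one might summarize each action-unit intensity track with velocity and acceleration statistics and train an off-the-shelf classifier; the signals and labels below are simulated stand-ins.

```python
# Sketch of the recipe: dynamic statistics of facial action signals feeding a
# genuine-vs-faked classifier. Data and labels are simulated placeholders, so
# accuracy here is at chance; only the pipeline shape is illustrated.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dynamic_features(au_track):
    """Summaries of one action-unit intensity time series; speed/acceleration
    statistics carry the dynamics cues described in the abstract above."""
    v = np.diff(au_track)                  # frame-to-frame velocity
    a = np.diff(v)                         # acceleration
    return [au_track.max(), v.max(), np.abs(a).max(), (v > 0).mean()]

rng = np.random.default_rng(1)
X = np.array([dynamic_features(np.cumsum(rng.standard_normal(100)))
              for _ in range(120)])        # 120 simulated expression clips
y = rng.integers(0, 2, 120)                # 0 = genuine, 1 = faked (dummy)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())
```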
Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long
2013-10-30
This study assessed facial emotion recognition abilities in subjects with paranoid and non-paranoid schizophrenia using signal detection theory. We explored the differential deficits in facial emotion recognition in 44 paranoid patients with schizophrenia (PS) and 30 non-paranoid patients with schizophrenia (NPS), compared to 80 healthy controls. We used morphed faces with different intensities of emotion and computed the sensitivity index (d') for each emotion. The results showed that performance differed between the schizophrenia and healthy control groups in the recognition of both negative and positive affects. The PS group performed worse than the healthy control group but better than the NPS group in overall performance. Performance differed between the NPS and healthy control groups in the recognition of all basic emotions and neutral faces; between the PS and healthy control groups in the recognition of angry faces; and between the PS and NPS groups in the recognition of happiness, anger, sadness, disgust, and neutral affects. The facial emotion recognition impairment in schizophrenia may reflect a generalized deficit rather than a negative-emotion-specific deficit, with differential deficits between PS and NPS patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
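The sensitivity index used here is the standard signal-detection quantity d' = z(hit rate) - z(false-alarm rate). A minimal computation, assuming the common log-linear correction for extreme rates (the abstract does not state which correction was used), might look like this:

```python
# d' = z(hit rate) - z(false-alarm rate), the standard sensitivity index.
# The log-linear correction for rates of 0 or 1 is a common convention and
# an assumption here; the study's exact correction is not stated.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hr = (hits + 0.5) / (hits + misses + 1)                      # corrected hit rate
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hr) - norm.ppf(far)

# Example: 40 "happy" trials with 34 hits; 60 other trials with 9 false alarms.
print(round(d_prime(34, 6, 9, 51), 2))
```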
Drapeau, Joanie; Gosselin, Nathalie; Peretz, Isabelle; McKerral, Michelle
2017-01-01
To assess emotion recognition from dynamic facial, vocal and musical expressions in sub-groups of adults with traumatic brain injuries (TBI) of different severities and identify possible common underlying mechanisms across domains. Forty-one adults participated in this study: 10 with moderate-severe TBI, nine with complicated mild TBI, 11 with uncomplicated mild TBI and 11 healthy controls, who were administered experimental (emotional recognition, valence-arousal) and control tasks (emotional and structural discrimination) for each domain. Recognition of fearful faces was significantly impaired in moderate-severe and in complicated mild TBI sub-groups, as compared to those with uncomplicated mild TBI and controls. Effect sizes were medium-large. Participants with lower GCS scores performed more poorly when recognizing fearful dynamic facial expressions. Emotion recognition from auditory domains was preserved following TBI, irrespective of severity. All groups performed equally on control tasks, indicating no perceptual disorders. Although emotional recognition from vocal and musical expressions was preserved, no correlation was found across auditory domains. This preliminary study may contribute to improving comprehension of emotional recognition following TBI. Future studies of larger samples could usefully include measures of functional impacts of recognition deficits for fearful facial expressions. These could help refine interventions for emotional recognition following a brain injury.
Van Rheenen, Tamsyn E; Joshua, Nicole; Castle, David J; Rossell, Susan L
2017-03-01
Emotion recognition impairments have been demonstrated in schizophrenia (Sz), but are less consistent and lesser in magnitude in bipolar disorder (BD). This may be related to the extent to which different face processing strategies are engaged during emotion recognition in each of these disorders. We recently showed that Sz patients had impairments in the use of both featural and configural face processing strategies, whereas BD patients were impaired only in the use of the latter. Here we examine the influence that these impairments have on facial emotion recognition in these cohorts. Twenty-eight individuals with Sz, 28 individuals with BD, and 28 healthy controls completed a facial emotion labeling task with two conditions designed to separate the use of featural and configural face processing strategies; part-based and whole-face emotion recognition. Sz patients performed worse than controls on both conditions, and worse than BD patients on the whole-face condition. BD patients performed worse than controls on the whole-face condition only. Configural processing deficits appear to influence the recognition of facial emotions in BD, whereas both configural and featural processing abnormalities impair emotion recognition in Sz. This may explain discrepancies in the profiles of emotion recognition between the disorders. (JINS, 2017, 23, 287-291).
Halász, József; Áspán, Nikoletta; Bozsik, Csilla; Gádoros, Júlia; Inántsy-Pap, Judit
2013-01-01
In adult individuals with antisocial personality disorder, impairment in the recognition of fear seems established. In adolescents with conduct disorder (the antecedent of antisocial personality disorder), only sporadic data exist, but literature data indicate alterations in the recognition of emotions. The aim of the present study was to assess the relationship between emotion recognition and conduct symptoms in non-clinical adolescents. 53 adolescents participated in the study (13-16 years; boys, n=29, age 14.7±0.2 years; girls, n=24, age 14.7±0.2 years) after informed consent. The parent version of the Strengths and Difficulties Questionnaire was used to assess behavioral problems. The recognition of six basic emotions was established by the "Facial Expressions of Emotion-Stimuli and Tests", while Raven IQ measures were also performed. Compared to boys, girls showed significantly better performance in the recognition of disgust (p<0.035), while no significant difference occurred in the recognition of other emotions. In boys, the Conduct Problems score was inversely correlated with the recognition of fear (Spearman R=-0.40, p<0.031) and overall emotion recognition (Spearman R=-0.44, p<0.015), while a similar correlation was not present in girls. The relationship between the recognition of emotions and conduct problems might indicate an important mechanism in the development of antisocial behavior.
Survey of Technologies for the Airport Border of the Future
2014-04-01
[No abstract recovered; the source record contains only an extracted keyword index listing recognition technologies, e.g., handwriting, iris, retinal, ear, periocular, palmprint, vein, hand geometry, DNA matching, tongue print, footstep, odour, and emotion recognition.]
Neves, Maila de Castro Lourenço das; Tremeau, Fabien; Nicolato, Rodrigo; Lauar, Hélio; Romano-Silva, Marco Aurélio; Correa, Humberto
2011-09-01
A large body of evidence suggests that several aspects of face processing are impaired in autism and that this impairment might be hereditary. This study was aimed at assessing facial emotion recognition in parents of children with autism and its associations with a functional polymorphism of the serotonin transporter (5HTTLPR). We evaluated 40 parents of children with autism and 41 healthy controls. All participants were administered the Penn Emotion Recognition Test (ER40) and were genotyped for 5HTTLPR. Our study showed that parents of children with autism performed worse in the facial emotion recognition test than controls. Analyses of error patterns showed that parents of children with autism over-attributed neutral to emotional faces. We found evidence that 5HTTLPR polymorphism did not influence the performance in the Penn Emotion Recognition Test, but that it may determine different error patterns. Facial emotion recognition deficits are more common in first-degree relatives of autistic patients than in the general population, suggesting that facial emotion recognition is a candidate endophenotype for autism.
Bora, Emre; Velakoulis, Dennis; Walterfang, Mark
2016-07-01
Behavioral disturbances and lack of empathy are distinctive clinical features of behavioral variant frontotemporal dementia (bvFTD) in comparison to Alzheimer disease (AD). The aim of this meta-analytic review was to compare facial emotion recognition performances of bvFTD with healthy controls and AD. The current meta-analysis included a total of 19 studies and involved comparisons of 288 individuals with bvFTD and 329 healthy controls and 162 bvFTD and 147 patients with AD. Facial emotion recognition was significantly impaired in bvFTD in comparison to the healthy controls (d = 1.81) and AD (d = 1.23). In bvFTD, recognition of negative emotions, especially anger (d = 1.48) and disgust (d = 1.41), were severely impaired. Emotion recognition was significantly impaired in bvFTD in comparison to AD in all emotions other than happiness. Impairment of emotion recognition is a relatively specific feature of bvFTD. Routine assessment of social-cognitive abilities including emotion recognition can be helpful in better differentiating between cortical dementias such as bvFTD and AD. © The Author(s) 2016.
Percinel, Ipek; Ozbaran, Burcu; Kose, Sezen; Simsek, Damla Goksen; Darcan, Sukran
2018-03-01
In this study we aimed to evaluate emotion recognition and emotion regulation skills of children with exogenous obesity between the ages of 11 and 18 years and compare them with healthy controls. The Schedule for Affective Disorders and Schizophrenia for School Aged Children was used for psychiatric evaluations. Emotion recognition skills were evaluated using Faces Test and Reading the Mind in the Eyes Test. The Difficulties in Emotions Regulation Scale was used for evaluating skills of emotion regulation. Children with obesity had lower scores on Faces Test and Reading the Mind in the Eyes Test, and experienced greater difficulty in emotional regulation skills. Improved understanding of emotional recognition and emotion regulation in young people with obesity may improve their social adaptation and help in the treatment of their disorder. To the best of our knowledge, this is the first study to evaluate both emotional recognition and emotion regulation functions in obese children and obese adolescents between 11 and 18 years of age.
Food-Induced Emotional Resonance Improves Emotion Recognition.
Pandolfi, Elisa; Sacripante, Riccardo; Cardini, Flavia
2016-01-01
The effect of food substances on emotional states has been widely investigated, showing, for example, that eating chocolate is able to reduce negative mood. Here, for the first time, we have shown that the consumption of specific food substances is not only able to induce particular emotional states, but more importantly, to facilitate recognition of corresponding emotional facial expressions in others. Participants were asked to perform an emotion recognition task before and after eating either a piece of chocolate or a small amount of fish sauce-which we expected to induce happiness or disgust, respectively. Our results showed that being in a specific emotional state improves recognition of the corresponding emotional facial expression. Indeed, eating chocolate improved recognition of happy faces, while disgusted expressions were more readily recognized after eating fish sauce. In line with the embodied account of emotion understanding, we suggest that people are better at inferring the emotional state of others when their own emotional state resonates with the observed one.
Barona, Manuela; Kothari, Radha; Skuse, David; Micali, Nadia
2015-01-01
Recent research investigating the extreme male brain theory of autism spectrum disorders (ASD) has drawn attention to the possibility that autistic-type social difficulties may be associated with high prenatal testosterone exposure. This study aims to investigate the association between social communication and emotion recognition difficulties and the second to fourth digit ratio (2D:4D) and circulating maternal testosterone during pregnancy in a large community-based cohort: the Avon Longitudinal Study of Parents and Children (ALSPAC). A secondary aim is to investigate possible gender differences in the associations. Data on social communication (Social and Communication Disorders Checklist, N = 7165), emotion recognition (emotional triangles, N = 5844, and the Diagnostic Analysis of Non-Verbal Accuracy, N = 7488) and 2D:4D (second to fourth digit ratio, N = 7159) were collected in childhood and early adolescence from questionnaires and face-to-face assessments. Complete data were available on 3515 children. Maternal circulating testosterone during pregnancy was available in a subsample of 89 children. Males had lower 2D:4D ratios than females [t(3513) = -9.775, p < 0.001]. An association was found between measures of social communication and emotion recognition and the lowest 10% of 2D:4D ratios. A significant association was found between maternal circulating testosterone and left-hand 2D:4D [OR = 1.65, 95% CI 1.1-2.4, p < 0.01]. Previous findings on the association between 2D:4D and social communication difficulties were not confirmed. A novel association with an extreme measure of 2D:4D in males suggests threshold effects and warrants replication.
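For readers unfamiliar with the reporting convention, an odds ratio like the one bracketed above is recoverable from a logistic regression coefficient as OR = exp(beta), with 95% CI exp(beta ± 1.96·SE). The sketch below illustrates this on simulated data; it is not the ALSPAC analysis, and the 0.92 cut-off and effect sizes are invented.

```python
# Generic illustration of OR = exp(beta) and its 95% CI from a logistic fit.
# Simulated data with an invented 2D:4D cut-off; not the ALSPAC analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
ratio = rng.normal(0.95, 0.03, 500)                  # simulated left-hand 2D:4D
low = (ratio < 0.92).astype(float)                   # invented "low ratio" flag
p = 1 / (1 + np.exp(-(-2 + 1.0 * low)))              # hypothetical risk bump
outcome = (rng.random(500) < p).astype(float)

model = sm.Logit(outcome, sm.add_constant(low)).fit(disp=0)
beta, se = model.params[1], model.bse[1]
print(f"OR = {np.exp(beta):.2f}, "
      f"95% CI ({np.exp(beta - 1.96 * se):.2f}, {np.exp(beta + 1.96 * se):.2f})")
```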
Ruocco, Anthony C.; Reilly, James L.; Rubin, Leah H.; Daros, Alex R.; Gershon, Elliot S.; Tamminga, Carol A.; Pearlson, Godfrey D.; Hill, S. Kristian; Keshavan, Matcheri S.; Gur, Ruben C.; Sweeney, John A.
2014-01-01
Background Difficulty recognizing facial emotions is an important social-cognitive deficit associated with psychotic disorders. It also may reflect a familial risk for psychosis in schizophrenia-spectrum disorders and bipolar disorder. Objective The objectives of this study from the Bipolar-Schizophrenia Network on Intermediate Phenotypes (B-SNIP) consortium were to: 1) compare emotion recognition deficits in schizophrenia, schizoaffective disorder and bipolar disorder with psychosis, 2) determine the familiality of emotion recognition deficits across these disorders, and 3) evaluate emotion recognition deficits in nonpsychotic relatives with and without elevated Cluster A and Cluster B personality disorder traits. Method Participants included probands with schizophrenia (n=297), schizoaffective disorder (depressed type, n=61; bipolar type, n=69), bipolar disorder with psychosis (n=248), their first-degree relatives (n=332, n=69, n=154, and n=286, respectively) and healthy controls (n=380). All participants completed the Penn Emotion Recognition Test, a standardized measure of facial emotion recognition assessing four basic emotions (happiness, sadness, anger and fear) and neutral expressions (no emotion). Results Compared to controls, emotion recognition deficits among probands increased progressively from bipolar disorder to schizoaffective disorder to schizophrenia. Proband and relative groups showed similar deficits perceiving angry and neutral faces, whereas deficits on fearful, happy and sad faces were primarily isolated to schizophrenia probands. Even non-psychotic relatives without elevated Cluster A or Cluster B personality disorder traits showed deficits on neutral and angry faces. Emotion recognition ability was moderately familial only in schizophrenia families. Conclusions Emotion recognition deficits are prominent but somewhat different across psychotic disorders. These deficits are reflected to a lesser extent in relatives, particularly on angry and neutral faces. Deficits were evident in non-psychotic relatives even without elevated personality disorder traits. Deficits in facial emotion recognition may reflect an important social-cognitive deficit in patients with psychotic disorders. PMID:25052782
Rohr, Michaela; Tröger, Johannes; Michely, Nils; Uhde, Alarith; Wentura, Dirk
2017-07-01
This article deals with two well-documented phenomena regarding emotional stimuli: emotional memory enhancement (better long-term memory for emotional than for neutral stimuli) and the emotion-induced recognition bias (a more liberal response criterion for emotional than for neutral stimuli). Studies on visual emotion perception and attention suggest that emotion-related processes can be modulated by means of spatial-frequency filtering of the presented emotional stimuli. Specifically, low spatial frequencies are assumed to play a primary role in the influence of emotion on attention and judgment. Given this theoretical background, we investigated whether spatial-frequency filtering also impacts (1) the memory advantage for emotional faces and (2) the emotion-induced recognition bias, in a series of old/new recognition experiments. Participants completed incidental-learning tasks with high- (HSF) and low- (LSF) spatial-frequency-filtered emotional and neutral faces. The results of the surprise recognition tests showed a clear memory advantage for emotional stimuli. Most importantly, the emotional memory enhancement was significantly larger for face images containing only low-frequency information (LSF faces) than for HSF faces across all experiments, suggesting that LSF information plays a critical role in this effect, whereas the emotion-induced recognition bias was found only for HSF stimuli. We discuss our findings in terms of both the traditional account of different processing pathways for HSF and LSF information and a stimulus features account. The double dissociation in the results favors the latter account, that is, an explanation in terms of differences in the characteristics of HSF and LSF stimuli.
Emotion Recognition Pattern in Adolescent Boys with Attention-Deficit/Hyperactivity Disorder
Bozsik, Csilla; Gadoros, Julia; Inantsy-Pap, Judit
2014-01-01
Background. Social and emotional deficits have recently been considered inherent features of individuals with attention-deficit hyperactivity disorder (ADHD), but only sporadic literature data exist on emotion recognition in adolescents with ADHD. The aim of the present study was to establish the emotion recognition profile of adolescent boys with ADHD in comparison with control adolescents. Methods. Forty-four adolescent boys (13–16 years) participated in the study after informed consent; 22 boys had a clinical diagnosis of ADHD, while data were also assessed from 22 adolescent control boys matched for age and Raven IQ. Parent- and self-reported behavioral characteristics were assessed by means of the Strengths and Difficulties Questionnaire. The recognition of six basic emotions was evaluated by the "Facial Expressions of Emotion-Stimuli and Tests." Results. Compared to controls, adolescents with ADHD were more sensitive in the recognition of disgust, worse in the recognition of fear, and showed a tendency toward impaired recognition of sadness. Hyperactivity measures showed an inverse correlation with fear recognition. Conclusion. Our data suggest that adolescent boys with ADHD show alterations in the recognition of specific emotions. PMID:25110694
Lin, Chia-Yao; Tien, Yi-Min; Huang, Jong-Tsun; Tsai, Chon-Haw; Hsu, Li-Chuan
2016-01-01
Because of dopaminergic neurodegeneration, patients with Parkinson's disease (PD) show impairment in the recognition of negative facial expressions. In the present study, we aimed to determine whether PD patients with more advanced motor problems would show a much greater deficit in recognition of emotional facial expressions than a control group and whether impairment of emotion recognition would extend to positive emotions. Twenty-nine PD patients and 29 age-matched healthy controls were recruited. Participants were asked to discriminate emotions in Experiment 1 and identify gender in Experiment 2. In Experiment 1, PD patients demonstrated a recognition deficit for negative (sadness and anger) and positive faces. Further analysis showed that only PD patients with high motor dysfunction performed poorly in recognition of happy faces. In Experiment 2, PD patients showed an intact ability for gender identification, and the results eliminated possible abilities in the functions measured in Experiment 2 as alternative explanations for the results of Experiment 1. We concluded that patients' ability to recognize emotions deteriorated as the disease progressed. Recognition of negative emotions was impaired first, and then the impairment extended to positive emotions.
Parents’ Emotion-Related Beliefs, Behaviors, and Skills Predict Children's Recognition of Emotion
Castro, Vanessa L.; Halberstadt, Amy G.; Lozada, Fantasy T.; Craig, Ashley B.
2015-01-01
Children who are able to recognize others’ emotions are successful in a variety of socioemotional domains, yet we know little about how school-aged children's abilities develop, particularly in the family context. We hypothesized that children develop emotion recognition skill as a function of parents’ own emotion-related beliefs, behaviors, and skills. We examined parents’ beliefs about the value of emotion and guidance of children's emotion, parents’ emotion labeling and teaching behaviors, and parents’ skill in recognizing children's emotions in relation to their school-aged children's emotion recognition skills. Sixty-nine parent-child dyads completed questionnaires, participated in dyadic laboratory tasks, and identified their own emotions and emotions felt by the other participant from videotaped segments. Regression analyses indicate that parents’ beliefs, behaviors, and skills together account for 37% of the variance in child emotion recognition ability, even after controlling for parent and child expressive clarity. The findings suggest the importance of the family milieu in the development of children's emotion recognition skill in middle childhood, and add to accumulating evidence suggesting important age-related shifts in the relation between parental emotion socialization and child emotional development. PMID:26005393
Trinkler, Iris; Cleret de Langavant, Laurent; Bachoud-Lévi, Anne-Catherine
2013-02-01
Patients with Huntington's disease (HD), a neurodegenerative disorder that causes major motor impairments, also show cognitive and emotional deficits. While their deficit in recognising emotions has been explored in depth, little is known about their ability to express emotions and understand their feelings. If these faculties were impaired, patients might not only mis-read emotion expressions in others but their own emotions might be mis-interpreted by others as well, or thirdly, they might have difficulties understanding and describing their feelings. We compared the performance of recognition and expression of facial emotions in 13 HD patients with mild motor impairments but without significant bucco-facial abnormalities, and 13 controls matched for age and education. Emotion recognition was investigated in a forced-choice recognition test (FCR), and emotion expression by filming participants while they mimed the six basic emotional facial expressions (anger, disgust, fear, surprise, sadness and joy) to the experimenter. The films were then segmented into 60 stimuli per participant and four external raters performed a FCR on this material. Further, we tested understanding of feelings in self (alexithymia) and others (empathy) using questionnaires. Both recognition and expression were impaired across different emotions in HD compared to controls and recognition and expression scores were correlated. By contrast, alexithymia and empathy scores were very similar in HD and controls. This might suggest that emotion deficits in HD might be tied to the expression itself. Because similar emotion recognition-expression deficits are also found in Parkinson's Disease and vascular lesions of the striatum, our results further confirm the importance of the striatum for emotion recognition and expression, while access to the meaning of feelings relies on a different brain network, and is spared in HD. Copyright © 2011 Elsevier Ltd. All rights reserved.
Social Approach and Emotion Recognition in Fragile X Syndrome
ERIC Educational Resources Information Center
Williams, Tracey A.; Porter, Melanie A.; Langdon, Robyn
2014-01-01
Evidence is emerging that individuals with Fragile X syndrome (FXS) display emotion recognition deficits, which may contribute to their significant social difficulties. The current study investigated the emotion recognition abilities, and social approachability judgments, of FXS individuals when processing emotional stimuli. Relative to…
Gender interactions in the recognition of emotions and conduct symptoms in adolescents.
Halász, József; Aspán, Nikoletta; Bozsik, Csilla; Gádoros, Júlia; Inántsy-Pap, Judit
2014-01-01
According to literature data, impairment in the recognition of emotions might be related to an antisocial developmental pathway. In the present study, the gender-specific interaction between emotion recognition and conduct symptoms was studied in non-clinical adolescents. After informed consent, 29 boys and 24 girls (13-16 years, 14 ± 0.1 years) participated in the study. The parent version of the Strengths and Difficulties Questionnaire was used to assess behavioral problems. The recognition of basic emotions was analyzed according to both the gender of the participants and the gender of the stimulus faces via the "Facial Expressions of Emotion-Stimuli and Tests". Girls were significantly better than boys in the recognition of disgust, irrespective of the gender of the stimulus faces, albeit both genders were significantly better at recognizing disgust in male stimulus faces than in female stimulus faces. Both boys and girls were significantly better at recognizing sadness in female stimulus faces than in male stimulus faces. There was no gender effect (neither participant nor stimulus faces) in the recognition of other emotions. Conduct scores in boys were inversely correlated with the recognition of fear in male stimulus faces (R=-0.439, p<0.05) and with overall emotion recognition in male stimulus faces (R=-0.558, p<0.01). In girls, conduct scores showed a tendency toward a positive correlation with disgust recognition in female stimulus faces (R=0.376, p<0.07). A gender-specific interaction between the recognition of emotions and an antisocial developmental pathway is suggested.
More Pronounced Deficits in Facial Emotion Recognition for Schizophrenia than Bipolar Disorder
Goghari, Vina M; Sponheim, Scott R
2012-01-01
Schizophrenia and bipolar disorder are typically separated in diagnostic systems. Behavioural, cognitive, and brain abnormalities associated with each disorder nonetheless overlap. We evaluated the diagnostic specificity of facial emotion recognition deficits in schizophrenia and bipolar disorder to determine whether select aspects of emotion recognition differed for the two disorders. The investigation used an experimental task that included the same facial images in an emotion recognition condition and an age recognition condition (to control for processes associated with general face recognition) in 27 schizophrenia patients, 16 bipolar I patients, and 30 controls. Schizophrenia and bipolar patients exhibited both shared and distinct aspects of facial emotion recognition deficits. Schizophrenia patients had deficits in recognizing angry facial expressions compared to healthy controls and bipolar patients. Compared to control participants, both schizophrenia and bipolar patients were more likely to mislabel facial expressions of anger as fear. Given that schizophrenia patients exhibited a deficit in emotion recognition for angry faces, which did not appear due to generalized perceptual and cognitive dysfunction, improving recognition of threat-related expression may be an important intervention target to improve social functioning in schizophrenia. PMID:23218816
Mohan, S N; Mukhtar, F; Jobson, L
2016-01-01
Introduction Depression is a mood disorder that affects a significant proportion of the population worldwide. In Malaysia and Australia, the number of people diagnosed with depression is on the rise. It has been found that impairments in emotion processing and emotion regulation play a role in the development and maintenance of depression. This study is based on Matsumoto and Hwang's biocultural model of emotion and Triandis' Subjective Culture model. It aims to investigate the influence of culture on emotion processing among Malaysians and Australians with and without major depressive disorder (MDD). Methods and analysis This study will adopt a between-group design. Participants will include Malaysian Malays and Caucasian Australians with and without MDD (N=320). There will be four tasks involved in this study, namely: (1) the facial emotion recognition task, (2) the biological motion task, (3) the subjective experience task and (4) the emotion meaning task. It is hypothesised that there will be cultural differences in how participants with and without MDD respond to these emotion tasks and that, pan-culturally, MDD will influence accuracy rates in the facial emotion recognition task and the biological motion task. Ethics and dissemination This study is approved by the Universiti Putra Malaysia Research Ethics Committee (JKEUPM) and the Monash University Human Research Ethics Committee (MUHREC). Permission to conduct the study has also been obtained from the National Medical Research Register (NMRR; NMRR-15-2314-26919). On completion of the study, data will be kept by Universiti Putra Malaysia for a specific period of time before they are destroyed. Data will be published in a collective manner in the form of journal articles with no reference to a specific individual. PMID:27798019
Öztürk, Ahmet; Kiliç, Alperen; Deveci, Erdem; Kirpinar, İsmet
2016-01-01
Background Facial emotion recognition is well studied in various neuropsychiatric disorders. Although emotional disturbances are strongly associated with somatoform disorders, only a restricted number of studies have investigated facial emotion recognition in somatoform disorders, and none have examined the issue using the new diagnostic criteria, under which somatoform disorders are reclassified as somatic symptom and related disorders (SSD). In this study, we aimed to compare facial emotion recognition between patients with SSD and age- and sex-matched healthy controls (HC), re-examining the question using the new criteria for SSD. Patients and methods After applying the inclusion and exclusion criteria, 54 patients who were diagnosed with SSD according to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria and 46 age- and sex-matched HC were selected to participate in the present study. Facial emotion recognition, alexithymia, and the status of anxiety and depression were compared between the groups. Results Patients with SSD had significantly lower facial emotion recognition scores for fearful, disgusted, and neutral faces compared with age- and sex-matched HC (t=−2.88, P=0.005; t=−2.86, P=0.005; and t=−2.56, P=0.009, respectively). After eliminating the effects of alexithymia and depressive and anxious states, the groups were found to be similar in terms of their responses to facial emotion and mean reaction time to facial emotions. Discussion Although only a limited number of studies have examined the recognition of facial emotion in patients with somatoform disorders, our study is the first to investigate facial emotion recognition in patients with SSD diagnosed according to the DSM-5 criteria. Recognition of facial emotion was found to be disturbed in patients with SSD. However, our findings suggest that disturbances in facial emotion recognition were significantly associated with alexithymia and the status of depression and anxiety, which is consistent with previous studies. Further studies are needed to clarify the associations between facial emotion recognition and SSD. PMID:27199559
Family environment influences emotion recognition following paediatric traumatic brain injury.
Schmidt, Adam T; Orsten, Kimberley D; Hanten, Gerri R; Li, Xiaoqi; Levin, Harvey S
2010-01-01
This study investigated the relationship between family functioning and performance on two tasks of emotion recognition (emotional prosody and face emotion recognition) and a cognitive control procedure (the Flanker task) following paediatric traumatic brain injury (TBI) or orthopaedic injury (OI). A total of 142 children (75 TBI, 67 OI) were assessed on three occasions: baseline, 3 months and 1 year post-injury on the two emotion recognition tasks and the Flanker task. Caregivers also completed the Life Stressors and Resources Scale (LISRES) on each occasion. Growth curve analysis was used to analyse the data. Results indicated that family functioning influenced performance on the emotional prosody and Flanker tasks but not on the face emotion recognition task. Findings on both the emotional prosody and Flanker tasks were generally similar across groups. However, financial resources emerged as significantly related to emotional prosody performance in the TBI group only (p = 0.0123). Findings suggest family functioning variables, especially financial resources, can influence performance on an emotional processing task following TBI in children.
Chaspari, Theodora; Soldatos, Constantin; Maragos, Petros
2015-01-01
The development of ecologically valid procedures for collecting reliable and unbiased emotional data is a prerequisite for computer interfaces with social and affective intelligence targeting patients with mental disorders. To this end, the Athens Emotional States Inventory (AESI) proposes the design, recording and validation of an audiovisual database for five emotional states: anger, fear, joy, sadness and neutral. The items of the AESI consist of sentences each having content indicative of the corresponding emotion. Emotional content was assessed through a survey of 40 young participants with a questionnaire following the Latin square design. The emotional sentences that were correctly identified by 85% of the participants were recorded in a soundproof room with microphones and cameras. A preliminary validation of the AESI was performed through automatic emotion recognition experiments from speech. The resulting database contains 696 recorded utterances in the Greek language by 20 native speakers and has a total duration of approximately 28 min. Speech classification results yield accuracy up to 75.15% for automatically recognizing the emotions in the AESI. These results indicate the usefulness of our approach for collecting emotional data with reliable content, balanced across classes and with reduced environmental variability.
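The speech classification experiment described above maps naturally onto a standard features-plus-classifier pipeline. The following Python sketch is a hypothetical reconstruction rather than the authors' code: the MFCC summary features, the 16 kHz sampling rate, the RBF-SVM, and all function names are assumptions, since the abstract does not specify the pipeline behind the 75.15% figure.

```python
# Minimal sketch of an utterance-level speech emotion classifier of the kind
# used to validate an emotional speech corpus. All names and parameters are
# illustrative assumptions, not the AESI authors' implementation.
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["anger", "fear", "joy", "sadness", "neutral"]  # AESI's five states

def utterance_features(wav_path):
    """Summarize an utterance as the mean and std of 13 MFCCs (26 dims)."""
    y, sr = librosa.load(wav_path, sr=16000)  # assumed sampling rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def mean_cv_accuracy(wav_paths, labels):
    """5-fold cross-validated accuracy of an RBF-SVM on utterance features."""
    X = np.vstack([utterance_features(p) for p in wav_paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, labels, cv=5).mean()
```

On a balanced corpus such as the one described (696 utterances, five classes), `mean_cv_accuracy` would return a single accuracy figure comparable in kind to the reported 75.15%.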
Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo
2016-01-01
Altered emotional processing, including reduced emotion facial expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques, and it is not known whether altered facial expression and recognition in PD are related. This study investigated possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all P < 0.05). Patients also yielded worse Ekman global scores and disgust, sadness, and fear sub-scores than healthy controls (all P < 0.001). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all P > 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all P > 0.05). The results of this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.
Rosenberg, Hannah; McDonald, Skye; Dethier, Marie; Kessels, Roy P C; Westbrook, R Frederick
2014-11-01
Many individuals who sustain moderate-severe traumatic brain injuries (TBI) are poor at recognizing emotional expressions, with a greater impairment in recognizing negative (e.g., fear, disgust, sadness, and anger) than positive emotions (e.g., happiness and surprise). It has been questioned whether this "valence effect" might be an artifact of the wide use of static facial emotion stimuli (usually full-blown expressions) which differ in difficulty rather than a real consequence of brain impairment. This study aimed to investigate the valence effect in TBI, while examining emotion recognition across different intensities (low, medium, and high). Twenty-seven individuals with TBI and 28 matched control participants were tested on the Emotion Recognition Task (ERT). The TBI group was more impaired in overall emotion recognition, and less accurate recognizing negative emotions. However, examining the performance across the different intensities indicated that this difference was driven by some emotions (e.g., happiness) being much easier to recognize than others (e.g., fear and surprise). Our findings indicate that individuals with TBI have an overall deficit in facial emotion recognition, and that both people with TBI and control participants found some emotions more difficult than others. These results suggest that conventional measures of facial affect recognition that do not examine variance in the difficulty of emotions may produce erroneous conclusions about differential impairment. They also cast doubt on the notion that dissociable neural pathways underlie the recognition of positive and negative emotions, which are differentially affected by TBI and potentially other neurological or psychiatric disorders.
Basu, Anamitra; Mandal, Manas K
2004-07-01
The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual-field; the difference was nonsignificant in the left visual-field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages. Copyright Taylor and Francis Inc.
Conduct symptoms and emotion recognition in adolescent boys with externalization problems.
Aspan, Nikoletta; Vida, Peter; Gadoros, Julia; Halasz, Jozsef
2013-01-01
In adults with antisocial personality disorder, marked alterations in the recognition of facial affect have been described. Less consistent data are available on emotion recognition in adolescents with externalization problems. The aim of the present study was to assess the relation between the recognition of emotions and conduct symptoms in adolescent boys with externalization problems. Adolescent boys with externalization problems referred to Vadaskert Child Psychiatry Hospital participated in the study after informed consent (N = 114, 11-17 years, mean = 13.4). The conduct problems scale of the Strengths and Difficulties Questionnaire (parent and self-report) was used. Performance on a facial emotion recognition test was assessed. Conduct problems score (parent and self-report) was inversely correlated with overall emotion recognition. In the self-report, conduct problems score was inversely correlated with the recognition of anger, fear, and sadness. Adolescents with high conduct problems scores were significantly worse in the recognition of fear, sadness, and overall recognition than adolescents with low conduct scores, irrespective of age and IQ. Our results suggest that impaired emotion recognition is dimensionally related to conduct problems and might have importance in the development of antisocial behavior.
Buratto, Luciano G.; Pottage, Claire L.; Brown, Charity; Morrison, Catriona M.; Schaefer, Alexandre
2014-01-01
Memory performance is usually impaired when participants have to encode information while performing a concurrent task. Recent studies using recall tasks have found that emotional items are more resistant to such cognitive depletion effects than non-emotional items. However, when recognition tasks are used, the same effect is more elusive as recent recognition studies have obtained contradictory results. In two experiments, we provide evidence that negative emotional content can reliably reduce the effects of cognitive depletion on recognition memory only if stimuli with high levels of emotional intensity are used. In particular, we found that recognition performance for realistic pictures was impaired by a secondary 3-back working memory task during encoding if stimuli were emotionally neutral or had moderate levels of negative emotionality. In contrast, when negative pictures with high levels of emotional intensity were used, the detrimental effects of the secondary task were significantly attenuated. PMID:25330251
Effectiveness of Emotion Recognition Training for Young Children with Developmental Delays
ERIC Educational Resources Information Center
Downs, Andrew; Strand, Paul
2008-01-01
Emotion recognition is a basic skill that is thought to facilitate development of social and emotional competence. There is little research available examining whether therapeutic or instructional interventions can improve the emotion recognition skill of young children with various developmental disabilities. Sixteen preschool children with…
Body Emotion Recognition Disproportionately Depends on Vertical Orientations during Childhood
ERIC Educational Resources Information Center
Balas, Benjamin; Auen, Amanda; Saville, Alyson; Schmidt, Jamie
2018-01-01
Children's ability to recognize emotional expressions from faces and bodies develops during childhood. However, the low-level features that support accurate body emotion recognition during development have not been well characterized. This is in marked contrast to facial emotion recognition, which is known to depend upon specific spatial frequency…
Towards an architecture of a hybrid BCI based on SSVEP-BCI and passive-BCI.
Cotrina, Anibal; Benevides, Alessandro; Ferreira, Andre; Bastos, Teodiano; Castillo, Javier; Menezes, Maria Luiza; Pereira, Carlos
2014-01-01
Recent decades have seen BCI applications emerge as a novel and promising channel of communication, control and entertainment for disabled and healthy people. However, BCI technology can be prone to errors due to the emotional state of the user: the performance of reactive and active BCIs decreases when the user becomes stressed or bored, for example. The passive BCI is a recent approach that fuses BCI technology with cognitive monitoring, providing valuable information about the user's intentions, situational interpretations and, above all, emotional state. In this work, an architecture in which a passive-BCI works alongside an SSVEP-BCI is proposed, with the aim of improving the performance of the reactive BCI. The possibility of adjusting the recognition characteristics of SSVEP-BCIs using the passive-BCI output is evaluated. Two ways of recovering SSVEP accuracy are presented in this paper: 1) adjusting the amplitude of the SSVEP response, and 2) adjusting the frequency of the SSVEP response. The results are promising, as the accuracy of the SSVEP-BCI can be recovered in cases where it has been reduced by the user's emotional state.
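The first of the two adjustment strategies can be illustrated with a short sketch. The Python snippet below is a speculative toy, not the proposed architecture: it scores candidate SSVEP stimulus frequencies from EEG spectral power and relaxes the detection threshold as a passive-BCI stress index rises, on the stated assumption that stress or boredom attenuates the SSVEP response. All names, thresholds, and the form of the stress index are invented.

```python
# Toy illustration of amplitude-based SSVEP detection whose threshold is
# modulated by a passive-BCI emotional-state index (1.0 = calm baseline).
# An assumption-laden sketch, not the architecture from the paper.
import numpy as np

def ssvep_scores(eeg, fs, stim_freqs):
    """Spectral power of a single EEG channel at each stimulus frequency."""
    power = np.abs(np.fft.rfft(eeg)) ** 2
    bins = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return np.array([power[np.argmin(np.abs(bins - f))] for f in stim_freqs])

def classify_target(eeg, fs, stim_freqs, stress_index, base_factor=2.0):
    """Pick the attended stimulus, relaxing the threshold as stress rises."""
    scores = ssvep_scores(eeg, fs, stim_freqs)
    threshold = base_factor / max(stress_index, 1.0)  # lower bar under stress
    best = int(np.argmax(scores))
    # Accept only if the winning frequency clearly dominates the spectrum.
    return best if scores[best] > threshold * np.median(scores) else None
```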
Valenza, Gaetano; Citi, Luca; Gentili, Claudio; Lanata, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2015-01-01
The analysis of cognitive and autonomic responses to emotionally relevant stimuli could provide a viable solution for the automatic recognition of different mood states, both in normal and pathological conditions. In this study, we present a methodological application describing a novel system based on wearable textile technology and instantaneous nonlinear heart rate variability assessment, able to characterize the autonomic status of bipolar patients by considering only electrocardiogram recordings. As a proof of this concept, our study presents results obtained from eight bipolar patients during their normal daily activities, with emotions elicited according to a specific protocol through the presentation of emotionally relevant pictures. Linear and nonlinear features were computed using a novel point-process-based nonlinear autoregressive integrative model and compared with traditional algorithmic methods. The estimated indices were used as the input of a multilayer perceptron to discriminate the depressive from the euthymic status. Results show that our system achieves much higher accuracy than the traditional techniques. Moreover, the inclusion of instantaneous higher order spectra features significantly improves the accuracy in successfully recognizing depression from euthymia.
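The final classification stage described above can be sketched in a few lines. The snippet below stands in for it with synthetic data: a multilayer perceptron separating depressive from euthymic epochs using placeholder feature vectors, with the feature count and network size chosen arbitrarily; the point-process model that would produce the real HRV indices is not reproduced.

```python
# Sketch of the mood-discrimination stage: an MLP over per-epoch HRV indices.
# Features here are random stand-ins; only the classifier wiring is shown.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # placeholder linear + nonlinear HRV indices
y = rng.integers(0, 2, size=200)  # 0 = euthymic, 1 = depressive (synthetic)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```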
Four not six: Revealing culturally common facial expressions of emotion.
Jack, Rachael E; Sun, Wei; Delis, Ioannis; Garrod, Oliver G B; Schyns, Philippe G
2016-06-01
As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying which of these complex patterns are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data question the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
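The data reduction step at the heart of this abstract can be illustrated with a brief sketch. Below, non-negative matrix factorization stands in for the paper's unnamed multivariate technique: it factors a models-by-facial-movements matrix into a small number of latent expressive patterns. The matrix is synthetic and its dimensions (120 models, 42 action units) are placeholders.

```python
# Sketch of extracting latent expressive patterns from pooled expression
# models via NMF. Data are synthetic; NMF is an assumed stand-in for the
# paper's multivariate data reduction technique.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Rows: validated dynamic expression models (60+ emotions x 2 cultures);
# columns: activation weights of facial action units. Non-negative stand-ins.
models = np.abs(rng.normal(size=(120, 42)))

nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
weights = nmf.fit_transform(models)  # how much each model uses each pattern
patterns = nmf.components_           # four latent patterns over action units
```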
Major depressive disorder skews the recognition of emotional prosody.
Péron, Julie; El Tamer, Sarah; Grandjean, Didier; Leray, Emmanuelle; Travers, David; Drapier, Dominique; Vérin, Marc; Millet, Bruno
2011-06-01
Major depressive disorder (MDD) is associated with abnormalities in the recognition of emotional stimuli. MDD patients ascribe more negative emotion but also less positive emotion to facial expressions, suggesting blunted responsiveness to positive emotional stimuli. To ascertain whether these emotional biases are modality-specific, we examined the effects of MDD on the recognition of emotions from voices using a paradigm designed to capture subtle effects of biases. Twenty-one MDD patients and 21 healthy controls (HC) underwent clinical and neuropsychological assessments, followed by a paradigm featuring pseudowords spoken by actors in five types of emotional prosody, rated on continuous scales. Overall, MDD patients performed more poorly than HC, displaying significantly impaired recognition of fear, happiness and sadness. Compared with HC, they rated fear significantly more highly when listening to anger stimuli. They also displayed a bias toward surprise, rating it far higher when they heard sad or fearful utterances. Furthermore, for happiness stimuli, MDD patients gave higher ratings for negative emotions (fear and sadness). A multiple regression model of emotional prosody recognition in MDD patients showed that the best fit was achieved using executive functioning measures (categorical fluency, number of errors on the MCST, and TMT B-A) and the total score of the Montgomery-Asberg Depression Rating Scale. Impaired recognition of emotions would thus appear not to be specific to the visual modality but to be present also when emotions are expressed vocally, this impairment being related to depression severity and dysexecutive syndrome. MDD seems to skew the recognition of emotional prosody toward negative emotional stimuli, and the blunting of positive emotion appears not to be restricted to the visual modality. Copyright © 2011 Elsevier Inc. All rights reserved.
Specific Impairments in the Recognition of Emotional Facial Expressions in Parkinson’s Disease
Clark, Uraina S.; Neargarder, Sandy; Cronin-Golomb, Alice
2008-01-01
Studies investigating the ability to recognize emotional facial expressions in non-demented individuals with Parkinson’s disease (PD) have yielded equivocal findings. A possible reason for this variability may lie in the confounding of emotion recognition with cognitive task requirements, a confound arising from the lack of a control condition using non-emotional stimuli. The present study examined emotional facial expression recognition abilities in 20 non-demented patients with PD and 23 control participants relative to their performances on a non-emotional landscape categorization test with comparable task requirements. We found that PD participants were normal on the control task but exhibited selective impairments in the recognition of facial emotion, specifically for anger (driven by those with right hemisphere pathology) and surprise (driven by those with left hemisphere pathology), even when controlling for depression level. Male but not female PD participants further displayed specific deficits in the recognition of fearful expressions. We suggest that the neural substrates that may subserve these impairments include the ventral striatum, amygdala, and prefrontal cortices. Finally, we observed that in PD participants, deficiencies in facial emotion recognition correlated with higher levels of interpersonal distress, which calls attention to the significant psychosocial impact that facial emotion recognition impairments may have on individuals with PD. PMID:18485422
McIntosh, Lindsey G; Mannava, Sishir; Camalier, Corrie R; Folley, Bradley S; Albritton, Aaron; Konrad, Peter E; Charles, David; Park, Sohee; Neimat, Joseph S
2014-01-01
Parkinson's disease (PD) is traditionally regarded as a neurodegenerative movement disorder; however, nigrostriatal dopaminergic degeneration is also thought to disrupt non-motor loops connecting basal ganglia to areas in frontal cortex involved in cognition and emotion processing. PD patients are impaired on tests of emotion recognition, but it is difficult to disentangle this deficit from the more general cognitive dysfunction that frequently accompanies disease progression. Testing for emotion recognition deficits early in the disease course, prior to cognitive decline, better assesses the sensitivity of these non-motor corticobasal ganglia-thalamocortical loops involved in emotion processing to early degenerative change in basal ganglia circuits. In addition, contrasting this with a group of healthy aging individuals demonstrates changes in emotion processing specific to the degeneration of basal ganglia circuitry in PD. Early PD patients (EPD) were recruited from a randomized clinical trial testing the safety and tolerability of deep brain stimulation (DBS) of the subthalamic nucleus (STN-DBS) in early-stage PD. EPD patients were previously randomized to receive optimal drug therapy only (ODT), or drug therapy plus STN-DBS (ODT + DBS). Matched healthy elderly controls (HEC) and young controls (HYC) also participated in this study. Participants completed two control tasks and three emotion recognition tests that varied in stimulus domain. EPD patients were impaired on all emotion recognition tasks compared to HEC. Neither therapy type (ODT or ODT + DBS) nor therapy state (ON/OFF) altered emotion recognition performance in this study. Finally, HEC were impaired on vocal emotion recognition relative to HYC, suggesting a decline related to healthy aging. This study supports the existence of impaired emotion recognition early in the PD course, implicating an early disruption of fronto-striatal loops mediating emotional function.
Inconsistent emotion recognition deficits across stimulus modalities in Huntington's disease.
Rees, Elin M; Farmer, Ruth; Cole, James H; Henley, Susie M D; Sprengelmeyer, Reiner; Frost, Chris; Scahill, Rachael I; Hobbs, Nicola Z; Tabrizi, Sarah J
2014-11-01
Recognition of negative emotions is impaired in Huntington's Disease (HD). It is unclear whether these emotion-specific problems are driven by dissociable cognitive deficits, emotion complexity, test cue difficulty, or visuoperceptual impairments. This study set out to further characterise emotion recognition in HD by comparing patterns of deficits across stimulus modalities, notably including, for the first time in HD, the more ecologically and clinically relevant modality of film clips portraying dynamic facial expressions. Fifteen early HD and 17 control participants were tested on emotion recognition from static facial photographs, non-verbal vocal expressions and one second dynamic film clips, all depicting different emotions. Statistically significant evidence of impairment of anger, disgust and fear recognition was seen in HD participants compared with healthy controls across multiple stimulus modalities. The extent of the impairment, as measured by the difference in the number of errors made between HD participants and controls, differed according to the combination of emotion and modality (p=0.013, interaction test). The largest between-group difference was seen in the recognition of anger from film clips. Consistent with previous reports, anger, disgust and fear were the most poorly recognised emotions by the HD group. This impairment did not appear to be due to task demands or expression complexity, as the pattern of between-group differences did not correspond to the pattern of errors made by either group, implicating emotion-specific cognitive processing pathology. There was, however, evidence that the extent of emotion recognition deficits significantly differed between stimulus modalities. The implications for designing future tests of emotion recognition and for caregiving are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Gold, Rinat; Butler, Pamela; Revheim, Nadine; Leitman, David; Hansen, John A.; Gur, Ruben; Kantrowitz, Joshua T.; Laukka, Petri; Juslin, Patrik N.; Silipo, Gail S.; Javitt, Daniel C.
2013-01-01
Objective Schizophrenia is associated with deficits in the ability to perceive emotion based upon tone of voice. The basis for this deficit, however, remains unclear, and assessment batteries remain limited. We evaluated performance in schizophrenia on a novel voice emotion recognition battery with well-characterized physical features, relative to impairments in more general emotional and cognitive function. Methods We studied a primary sample of 92 patients and 73 controls. Stimuli were characterized according to both intended emotion and physical features (e.g., pitch, intensity) that contributed to the emotional percept. Parallel measures of visual emotion recognition, pitch perception, general cognition, and overall outcome were obtained. More limited measures were obtained in an independent replication sample of 36 patients, 31 age-matched controls, and 188 general comparison subjects. Results Patients showed significant, large effect size deficits in voice emotion recognition (F=25.4, p<.00001, d=1.1), and were preferentially impaired in recognition of emotion based upon pitch, but not intensity, features (group × feature interaction: F=7.79, p=.006). Emotion recognition deficits were significantly correlated with pitch perception impairments both across (r=.56, p<.0001) and within (r=.47, p<.0001) group. Path analysis showed both sensory-specific and general cognitive contributions to auditory emotion recognition deficits in schizophrenia. Similar patterns of results were observed in the replication sample. Conclusions The present study demonstrates impairments in auditory emotion recognition in schizophrenia relative to acoustic features of underlying stimuli. Furthermore, it provides tools for, and highlights the need for greater attention to, physical features of stimuli used in the study of social cognition in neuropsychiatric disorders. PMID:22362394
Impairment in the recognition of emotion across different media following traumatic brain injury.
Williams, Claire; Wood, Rodger Ll
2010-02-01
The current study examined emotion recognition following traumatic brain injury (TBI) and examined whether performance differed according to the affective valence and type of media presentation of the stimuli. A total of 64 patients with TBI and matched controls completed the Emotion Evaluation Test (EET) and Ekman 60 Faces Test (E-60-FT). Patients with TBI also completed measures of information processing and verbal ability. Results revealed that the TBI group were significantly impaired compared to controls when recognizing emotion on the EET and E-60-FT. A significant main effect of valence was found in both groups, with poor recognition of negative emotions. However, the difference between the recognition of positive and negative emotions was larger in the TBI group. The TBI group were also more accurate recognizing emotion displayed in audiovisual media (EET) than that displayed in still media (E-60-FT). No significant relationship was obtained between emotion recognition tasks and information-processing speed. A significant positive relationship was found between the E-60-FT and one measure of verbal ability. These findings support models of emotion that specify separate neurological pathways for certain emotions and different media and confirm that patients with TBI are vulnerable to experiencing emotion recognition difficulties.
Facial recognition deficits as a potential endophenotype in bipolar disorder.
Vierck, Esther; Porter, Richard J; Joyce, Peter R
2015-11-30
Bipolar disorder (BD) is considered a highly heritable and genetically complex disorder. Several cognitive functions, such as executive functions and verbal memory, have been suggested as promising candidates for endophenotypes. Although there is evidence for deficits in facial emotion recognition in individuals with BD, studies investigating these functions as endophenotypes are rare. The current study investigates emotion recognition as a potential endophenotype in BD by comparing 36 BD participants, 24 of their 1st-degree relatives and 40 healthy control participants on a computerised facial emotion recognition task. Group differences were evaluated using repeated-measures analysis of covariance with age as a covariate. Results revealed slowed emotion recognition for both BD patients and their relatives. Furthermore, BD participants were less accurate than healthy controls in their recognition of emotion expressions. We found no evidence of emotion-specific differences between groups. Our results provide evidence for facial emotion recognition as a potential endophenotype in BD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Clark, Uraina S.; Walker, Keenan A.; Cohen, Ronald A.; Devlin, Kathryn N.; Folkers, Anna M.; Pina, Mathew M.; Tashima, Karen T.
2015-01-01
Impaired facial emotion recognition abilities in HIV+ patients are well documented, but little is known about the neural etiology of these difficulties. We examined the relation of facial emotion recognition abilities to regional brain volumes in 44 HIV-positive (HIV+) and 44 HIV-negative control (HC) adults. Volumes of structures implicated in HIV-associated neuropathology and emotion recognition were measured on MRI using an automated segmentation tool. Relative to HC, HIV+ patients demonstrated emotion recognition impairments for fearful expressions, reduced anterior cingulate cortex (ACC) volumes, and increased amygdala volumes. In the HIV+ group, fear recognition impairments correlated significantly with ACC, but not amygdala, volumes. ACC reductions were also associated with lower nadir CD4 levels (i.e., greater HIV-disease severity). These findings extend our understanding of the neurobiological substrates underlying an essential social function, facial emotion recognition, in HIV+ individuals and implicate HIV-related ACC atrophy in the impairment of these abilities. PMID:25744868
Utterance independent bimodal emotion recognition in spontaneous communication
NASA Astrophysics Data System (ADS)
Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng
2011-12-01
In spontaneous face-to-face communication, emotional expressions are sometimes mixed with utterance-related facial movements, which creates difficulties for emotion recognition. This article introduces methods for reducing utterance influences on visual parameters in audio-visual emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). The utterance reduction is then accomplished by finding the residual between the real visual parameters and the outputs of an utterance-related visual parameter model. To solve this problem, the article introduces a Fused Hidden Markov Model Inversion method trained on a neutrally expressed audio-visual corpus. To reduce computing complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, Boosting), the utterance reduction method gives better emotion recognition results. The experiments also show the effectiveness of our emotion recognition system when used in a live environment.
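The residual idea in this abstract can be sketched with a joint Gaussian mixture standing in for the Fused HMM Inversion: fit a GMM on paired audio-visual frames from neutral speech, predict the utterance-driven visual parameters as the conditional mean given audio, and keep the leftover as the emotion-related component. Dimensions, variable names, and the GMM substitution are all assumptions.

```python
# Sketch: estimate utterance-driven visual parameters from audio via a joint
# GMM's conditional mean; the residual approximates the emotional component.
# A simplified stand-in for the paper's Fused HMM Inversion / GMM mapping.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(audio, visual, k=4):
    """Fit a joint GMM on stacked [audio, visual] frames of neutral speech."""
    return GaussianMixture(n_components=k, covariance_type="full",
                           random_state=0).fit(np.hstack([audio, visual]))

def expected_visual(gmm, audio, d_a):
    """E[visual | audio]: responsibility-weighted conditional means."""
    n, d_v = len(audio), gmm.means_.shape[1] - d_a
    num, den = np.zeros((n, d_v)), np.zeros(n)
    for w, mu, S in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_a, mu_v = mu[:d_a], mu[d_a:]
        S_aa, S_av = S[:d_a, :d_a], S[:d_a, d_a:]
        p = w * multivariate_normal.pdf(audio, mean=mu_a, cov=S_aa)
        cond = mu_v + (audio - mu_a) @ np.linalg.solve(S_aa, S_av)
        num += p[:, None] * cond
        den += p
    return num / den[:, None]

# Usage: residual = visual_frames - expected_visual(gmm, audio_frames, d_a);
# the residual would then replace the raw visual stream in the classifier.
```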
ERIC Educational Resources Information Center
Rojahn, Johannes; And Others
1995-01-01
This literature review discusses 21 studies on facial emotion recognition by persons with mental retardation in terms of methodological characteristics, stimulus material, salient variables and their relation to recognition tasks, and emotion recognition deficits in mental retardation. A table provides comparative data on all 21 studies. (DB)
Facial Emotion Recognition in Bipolar Disorder and Healthy Aging.
Altamura, Mario; Padalino, Flavia A; Stella, Eleonora; Balzotti, Angela; Bellomo, Antonello; Palumbo, Rocco; Di Domenico, Alberto; Mammarella, Nicola; Fairfield, Beth
2016-03-01
Emotional face recognition is impaired in bipolar disorder, but it is not clear whether this is specific for the illness. Here, we investigated how aging and bipolar disorder influence dynamic emotional face recognition. Twenty older adults, 16 bipolar patients, and 20 control subjects performed a dynamic affective facial recognition task and a subsequent rating task. Participants pressed a key as soon as they were able to discriminate whether the neutral face was assuming a happy or angry facial expression and then rated the intensity of each facial expression. Results showed that older adults recognized happy expressions faster, whereas bipolar patients recognized angry expressions faster. Furthermore, both groups rated emotional faces more intensely than did the control subjects. This study is one of the first to compare how aging and clinical conditions influence emotional facial recognition and underlines the need to consider the role of specific and common factors in emotional face recognition.
The involvement of emotion recognition in affective theory of mind.
Mier, Daniela; Lis, Stefanie; Neuthe, Kerstin; Sauer, Carina; Esslinger, Christine; Gallhofer, Bernd; Kirsch, Peter
2010-11-01
This study was conducted to explore the relationship between emotion recognition and affective Theory of Mind (ToM). Forty subjects performed a facial emotion recognition and an emotional intention recognition task (affective ToM) in an event-related fMRI study. Conjunction analysis revealed overlapping activation during both tasks. Activation in some of these conjunctly activated regions was even stronger during affective ToM than during emotion recognition, namely in the inferior frontal gyrus, the superior temporal sulcus, the temporal pole, and the amygdala. In contrast to previous studies investigating ToM, we found no activation in the anterior cingulate, commonly assumed to be the key region for ToM. The results point to a close relationship between emotion recognition and affective ToM and can be interpreted as evidence for the assumption that at least basal forms of ToM occur through an embodied, non-cognitive process. Copyright © 2010 Society for Psychophysiological Research.
Impaired Emotion Recognition in Music in Parkinson's Disease
ERIC Educational Resources Information Center
van Tricht, Mirjam J.; Smeding, Harriet M. M.; Speelman, Johannes D.; Schmand, Ben A.
2010-01-01
Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson's disease (PD) and 20 matched healthy volunteers. The role of cognitive dysfunction…
Emotion Recognition and Visual-Scan Paths in Fragile X Syndrome
ERIC Educational Resources Information Center
Shaw, Tracey A.; Porter, Melanie A.
2013-01-01
This study investigated emotion recognition abilities and visual scanning of emotional faces in 16 Fragile X syndrome (FXS) individuals compared to 16 chronological-age and 16 mental-age matched controls. The relationships between emotion recognition, visual scan-paths and symptoms of social anxiety, schizotypy and autism were also explored.…
Benito, Adolfo; Lahera, Guillermo; Herrera, Sara; Muncharaz, Ramón; Benito, Guillermo; Fernández-Liria, Alberto; Montes, José Manuel
2013-01-01
To analyze the recognition, identification, and discrimination of facial emotions in a sample of outpatients with bipolar disorder (BD). Forty-four outpatients with a diagnosis of BD and 48 matched control subjects were selected. Both groups were assessed with tests for recognition (Emotion Recognition-40 - ER40), identification (Facial Emotion Identification Test - FEIT), and discrimination (Facial Emotion Discrimination Test - FEDT) of facial emotions, as well as a theory of mind (ToM) verbal test (Hinting Task). Differences between groups were analyzed, controlling for the influence of mild depressive and manic symptoms. Patients with BD scored significantly lower than controls on recognition (ER40), identification (FEIT), and discrimination (FEDT) of emotions. Regarding the verbal measure of ToM, a lower score was also observed in patients compared to controls. Patients with mild syndromal depressive symptoms obtained outcomes similar to patients in euthymia. A significant correlation between FEDT scores and global functioning (measured by the Functioning Assessment Short Test, FAST) was found. These results suggest that, even in euthymia, patients with BD experience deficits in recognition, identification, and discrimination of facial emotions, with potential functional implications.
Whipple, Christina M.; Gfeller, Kate; Driscoll, Virginia; Oleson, Jacob; McGregor, Karla
2014-01-01
Background Effective musical communication requires conveyance of the intended message in a manner perceptible to the receiver. Communication disorders that impair transmitting or decoding of structural features of music (e.g., pitch, timbre) and/or symbolic representation may result in atypical musical communication, which can have a negative impact on music therapy interventions. Objective This study compared recognition of symbolic representation of emotions or movements in music by two groups of children with different communicative characteristics: severe to profound hearing loss (using cochlear implants [CI]) and autism spectrum disorder (ASD). Their responses were compared to those of children with typical development and normal hearing (TD-NH). Accuracy was examined as a function of communicative status, emotional or movement category, and individual characteristics. Methods Participants listened to recorded musical excerpts conveying emotions or movements and matched them with labels. Measures relevant to auditory and/or language function were also gathered. Results There was no significant difference between the ASD and TD-NH groups in identification of musical emotions or movements. However, the CI group was significantly less accurate than the other two groups in identification of both emotions and movements. Mixed effects logistic regression revealed different patterns of accuracy for specific emotions as a function of group. Conclusion Conveyance of emotions or movements through music may be decoded differently by persons with different types of communication disorders. Because music is the primary therapeutic tool in music therapy sessions, clinicians should consider these differential abilities when selecting music for clinical interventions focusing on emotions or movement. PMID:25691513
Callousness and affective face processing in adults: Behavioral and brain-potential indicators.
Brislin, Sarah J; Yancey, James R; Perkins, Emily R; Palumbo, Isabella M; Drislane, Laura E; Salekin, Randall T; Fanti, Kostas A; Kimonis, Eva R; Frick, Paul J; Blair, R James R; Patrick, Christopher J
2018-03-01
The investigation of callous-unemotional (CU) traits has been central to contemporary research on child behavior problems, and served as the impetus for inclusion of a specifier for conduct disorder in the latest edition of the official psychiatric diagnostic system. Here, we report results from 2 studies that evaluated the construct validity of callousness as assessed in adults, by testing for affiliated deficits in behavioral and neural processing of fearful faces, as have been shown in youthful samples. We hypothesized that scores on an established measure of callousness would predict reduced recognition accuracy and diminished electrocortical reactivity for fearful faces in adult participants. In Study 1, 66 undergraduate participants performed an emotion recognition task in which they viewed affective faces of different types and indicated the emotion expressed by each. In Study 2, electrocortical data were collected from 254 adult twins during viewing of fearful and neutral face stimuli, and scored for event-related response components. Analyses of Study 1 data revealed that higher callousness was associated with decreased recognition accuracy for fearful faces specifically. In Study 2, callousness was associated with reduced amplitude of both N170 and P200 responses to fearful faces. Current findings demonstrate for the first time that callousness in adults is associated with both behavioral and physiological deficits in the processing of fearful faces. These findings support the validity of the CU construct with adults and highlight the possibility of a multidomain measurement framework for continued study of this important clinical construct. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Brain correlates of musical and facial emotion recognition: evidence from the dementias.
Hsieh, S; Hornberger, M; Piguet, O; Hodges, J R
2012-07-01
The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities: unfamiliar musical tunes and unknown faces as well as volumetric MRI. Patients with SD were most impaired with the recognition of facial and musical emotions, particularly for negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions by facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language. Copyright © 2012 Elsevier Ltd. All rights reserved.
Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue
2009-06-15
Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.
Family environment influences emotion recognition following paediatric traumatic brain injury
SCHMIDT, ADAM T.; ORSTEN, KIMBERLEY D.; HANTEN, GERRI R.; LI, XIAOQI; LEVIN, HARVEY S.
2011-01-01
Objective This study investigated the relationship between family functioning and performance on two tasks of emotion recognition (emotional prosody and face emotion recognition) and a cognitive control procedure (the Flanker task) following paediatric traumatic brain injury (TBI) or orthopaedic injury (OI). Methods A total of 142 children (75 TBI, 67 OI) were assessed on three occasions: baseline, 3 months and 1 year post-injury on the two emotion recognition tasks and the Flanker task. Caregivers also completed the Life Stressors and Resources Scale (LISRES) on each occasion. Growth curve analysis was used to analyse the data. Results Results indicated that family functioning influenced performance on the emotional prosody and Flanker tasks but not on the face emotion recognition task. Findings on both the emotional prosody and Flanker tasks were generally similar across groups. However, financial resources emerged as significantly related to emotional prosody performance in the TBI group only (p = 0.0123). Conclusions Findings suggest family functioning variables—especially financial resources—can influence performance on an emotional processing task following TBI in children. PMID:21058900
Hosseini, Seyyed Abed; Khalilzadeh, Mohammad Ali; Naghibi-Sistani, Mohammad Bagher; Homam, Seyyed Mehran
2015-01-01
Background: This paper proposes a new emotional stress assessment system using multi-modal bio-signals. Electroencephalogram (EEG) is a reflection of brain activity and is widely used in clinical diagnosis and biomedical research. Methods: We design an efficient acquisition protocol to acquire EEG signals in five channels (FP1, FP2, T3, T4 and Pz) and peripheral signals such as blood volume pulse, skin conductance (SC) and respiration, under image induction (calm-neutral and negatively excited) for the participants. The visual stimulus images are selected from a subset of the International Affective Picture System database. Qualitative and quantitative evaluation of the peripheral signals is used to select suitable segments of the EEG signals, improving the accuracy of signal labeling according to emotional stress states. After pre-processing, wavelet coefficients, fractal dimension, and Lempel-Ziv complexity are used to extract features from the EEG signals. The large number of features leads to a dimensionality problem, which is solved using a genetic algorithm for feature selection. Results: The results show an average classification accuracy of 89.6% for two categories of emotional stress states using a support vector machine (SVM). Conclusion: This is a substantial improvement over similar studies; we achieve a gain of 11.3% in accuracy with the SVM classifier compared to previous work. The fusion of EEG and peripheral signals is therefore more robust than either type of signal alone. PMID:26622979
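As an illustration of the kind of feature pipeline described above, the sketch below computes wavelet sub-band energies, a fractal-dimension estimate (Petrosian's, one common estimator), and a normalized Lempel-Ziv complexity per channel, then trains an SVM. A simple univariate filter stands in for the paper's genetic-algorithm feature selection to keep the example short, and the epochs and labels are random placeholders, not the study's data.

```python
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_energies(x, wavelet='db4', level=5):
    """Relative energy of each wavelet sub-band."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    e = np.array([np.sum(c ** 2) for c in coeffs])
    return e / e.sum()

def petrosian_fd(x):
    """Petrosian fractal dimension, based on sign changes of the derivative."""
    diff = np.diff(x)
    n_sign_changes = np.sum(diff[1:] * diff[:-1] < 0)
    n = len(x)
    return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_sign_changes)))

def lempel_ziv(x):
    """Normalized Lempel-Ziv complexity of the median-binarized signal."""
    s = ''.join('1' if v > np.median(x) else '0' for v in x)
    i, c, seen = 0, 0, set()
    while i < len(s):
        j = i + 1
        while s[i:j] in seen and j <= len(s):
            j += 1
        seen.add(s[i:j])
        c += 1
        i = j
    return c * np.log2(len(s)) / len(s)

def eeg_features(epoch):
    """epoch: (n_channels, n_samples) array for one labeled EEG segment."""
    feats = []
    for ch in epoch:
        feats.extend(wavelet_energies(ch))
        feats.append(petrosian_fd(ch))
        feats.append(lempel_ziv(ch))
    return np.array(feats)

# Hypothetical dataset: 5-channel epochs labeled calm (0) / stressed (1).
rng = np.random.default_rng(0)
X = np.array([eeg_features(rng.normal(size=(5, 512))) for _ in range(60)])
y = rng.integers(0, 2, size=60)

# The paper uses a genetic algorithm for selection; a univariate
# filter is substituted here purely for brevity.
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=20), SVC(kernel='rbf'))
clf.fit(X, y)
```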
Herniman, Sarah E; Allott, Kelly A; Killackey, Eóin; Hester, Robert; Cotton, Sue M
2017-01-15
Comorbid depression is common in first-episode schizophrenia spectrum (FES) disorders. Both depression and FES are associated with significant deficits in facial and prosody emotion recognition performance. However, it remains unclear whether people with FES and comorbid depression, compared to those without comorbid depression, have overall poorer emotion recognition, or instead, a different pattern of emotion recognition deficits. The aim of this study was to compare facial and prosody emotion recognition performance between those with and without comorbid depression in FES. This study involved secondary analysis of baseline data from a randomized controlled trial of vocational intervention for young people with first-episode psychosis (N=82; age range: 15-25 years). Those with comorbid depression (n=24) had more accurate recognition of sadness in faces compared to those without comorbid depression. Severity of depressive symptoms was also associated with more accurate recognition of sadness in faces. Such results did not recur for prosody emotion recognition. In addition to the cross-sectional design, limitations of this study include the absence of facial and prosodic recognition of neutral emotions. Findings indicate a mood congruent negative bias in facial emotion recognition in those with comorbid depression and FES, and provide support for cognitive theories of depression that emphasise the role of such biases in the development and maintenance of depression. Longitudinal research is needed to determine whether mood-congruent negative biases are implicated in the development and maintenance of depression in FES, or whether such biases are simply markers of depressed state. Copyright © 2016 Elsevier B.V. All rights reserved.
Feeser, Melanie; Fan, Yan; Weigand, Anne; Hahn, Adam; Gärtner, Matti; Aust, Sabine; Böker, Heinz; Bajbouj, Malek; Grimm, Simone
2014-12-01
Previous studies have shown that oxytocin (OXT) enhances social cognitive processes. It has also been demonstrated that OXT does not uniformly facilitate social cognition. The effects of OXT administration strongly depend on the exposure to stressful experiences in early life. Emotional facial recognition is crucial for social cognition. However, no study has yet examined how the effects of OXT on the ability to identify emotional faces are altered by early life stress (ELS) experiences. Given the role of OXT in modulating social motivational processes, we specifically aimed to investigate its effects on the recognition of approach- and avoidance-related facial emotions. In a double-blind, between-subjects, placebo-controlled design, 82 male participants performed an emotion recognition task with faces taken from the "Karolinska Directed Emotional Faces" set. We clustered the six basic emotions along the dimensions approach (happy, surprise, anger) and avoidance (fear, sadness, disgust). ELS was assessed with the Childhood Trauma Questionnaire (CTQ). Our results showed that OXT improved the ability to recognize avoidance-related emotional faces as compared to approach-related emotional faces. Whereas the performance for avoidance-related emotions in participants with higher ELS scores was comparable in both OXT and placebo conditions, OXT enhanced emotion recognition in participants with lower ELS scores. Independent of OXT administration, we observed increased emotion recognition for avoidance-related faces in participants with high ELS scores. Our findings suggest that the investigation of OXT on social recognition requires a broad approach that takes ELS experiences as well as motivational processes into account.
Baez, Sandra; Marengo, Juan; Perez, Ana; Huepe, David; Font, Fernanda Giralt; Rial, Veronica; Gonzalez-Gadea, María Luz; Manes, Facundo; Ibanez, Agustin
2015-09-01
Impaired social cognition has been claimed to be a mechanism underlying the development and maintenance of borderline personality disorder (BPD). One important aspect of social cognition is the theory of mind (ToM), a complex skill that seems to be influenced by more basic processes, such as executive functions (EF) and emotion recognition. Previous ToM studies in BPD have yielded inconsistent results. This study assessed the performance of BPD adults on ToM, emotion recognition, and EF tasks. We also examined whether EF and emotion recognition could predict the performance on ToM tasks. We evaluated 15 adults with BPD and 15 matched healthy controls using different tasks of EF, emotion recognition, and ToM. The results showed that BPD adults exhibited deficits in the three domains, which seem to be task-dependent. Furthermore, we found that EF and emotion recognition predicted the performance on ToM. Our results suggest that tasks that involve real-life social scenarios and contextual cues are more sensitive to detect ToM and emotion recognition deficits in BPD individuals. Our findings also indicate that (a) ToM variability in BPD is partially explained by individual differences on EF and emotion recognition; and (b) ToM deficits of BPD patients are partially explained by the capacity to integrate cues from face, prosody, gesture, and social context to identify the emotions and others' beliefs. © 2014 The British Psychological Society.
Park, Soowon; Kim, Taehoon; Shin, Seong A; Kim, Yu Kyeong; Sohn, Bo Kyung; Park, Hyeon-Ju; Youn, Jung-Hae; Lee, Jun-Young
2017-01-01
Background: Facial emotion recognition (FER) is impaired in individuals with frontotemporal dementia (FTD) and Alzheimer’s disease (AD) when compared to healthy older adults. Since deficits in emotion recognition are closely related to caregiver burden or social interactions, researchers have fundamental interest in FER performance in patients with dementia. Purpose: The purpose of this study was to identify the performance profiles of six facial emotions (i.e., fear, anger, disgust, sadness, surprise, and happiness) and neutral faces measured among Korean healthy control (HCs), and those with mild cognitive impairment (MCI), AD, and FTD. Additionally, the neuroanatomical correlates of facial emotions were investigated. Methods: A total of 110 (33 HC, 32 MCI, 32 AD, 13 FTD) older adult participants were recruited from two different medical centers in metropolitan areas of South Korea. These individuals underwent an FER test that was used to assess the recognition of emotions or absence of emotion (neutral) in 35 facial stimuli. Repeated measures two-way analyses of variance were used to examine the distinct profiles of emotional recognition among the four groups. We also performed brain imaging and voxel-based morphometry (VBM) on the participants to examine the associations between FER scores and gray matter volume. Results: The mean score of negative emotion recognition (i.e., fear, anger, disgust, and sadness) clearly discriminated FTD participants from individuals with MCI and AD and HC [F(3,106) = 10.829, p < 0.001, η2 = 0.235], whereas the mean score of positive emotion recognition (i.e., surprise and happiness) did not. A VBM analysis showed negative emotions were correlated with gray matter volume of anterior temporal regions, whereas positive emotions were related to gray matter volume of fronto-parietal regions. Conclusion: Impairment of negative FER in patients with FTD is cross-cultural. The discrete neural correlates of FER indicate that emotional recognition processing is a multi-modal system in the brain. Focusing on the negative emotion recognition is a more effective way to discriminate healthy aging, MCI, and AD from FTD in older Korean adults. PMID:29249960
ERIC Educational Resources Information Center
Evers, Kris; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2015-01-01
Emotion labelling was evaluated in two matched samples of 6-14-year old children with and without an autism spectrum disorder (ASD; N = 45 and N = 50, resp.), using six dynamic facial expressions. The Emotion Recognition Task proved to be valuable demonstrating subtle emotion recognition difficulties in ASD, as we showed a general poorer emotion…
Omar, Rohani; Henley, Susie M.D.; Bartlett, Jonathan W.; Hailstone, Julia C.; Gordon, Elizabeth; Sauter, Disa A.; Frost, Chris; Scott, Sophie K.; Warren, Jason D.
2011-01-01
Despite growing clinical and neurobiological interest in the brain mechanisms that process emotion in music, these mechanisms remain incompletely understood. Patients with frontotemporal lobar degeneration (FTLD) frequently exhibit clinical syndromes that illustrate the effects of breakdown in emotional and social functioning. Here we investigated the neuroanatomical substrate for recognition of musical emotion in a cohort of 26 patients with FTLD (16 with behavioural variant frontotemporal dementia, bvFTD, 10 with semantic dementia, SemD) using voxel-based morphometry. On neuropsychological evaluation, patients with FTLD showed deficient recognition of canonical emotions (happiness, sadness, anger and fear) from music as well as faces and voices compared with healthy control subjects. Impaired recognition of emotions from music was specifically associated with grey matter loss in a distributed cerebral network including insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal and more posterior temporal and parietal cortices, amygdala and the subcortical mesolimbic system. This network constitutes an essential brain substrate for recognition of musical emotion that overlaps with brain regions previously implicated in coding emotional value, behavioural context, conceptual knowledge and theory of mind. Musical emotion recognition may probe the interface of these processes, delineating a profile of brain damage that is essential for the abstraction of complex social emotions. PMID:21385617
Recognition of emotions in autism: a formal meta-analysis.
Uljarevic, Mirko; Hamilton, Antonia
2013-07-01
Determining the integrity of emotion recognition in autistic spectrum disorder is important to our theoretical understanding of autism and to teaching social skills. Previous studies have reported both positive and negative results. Here, we take a formal meta-analytic approach, bringing together data from 48 papers testing over 980 participants with autism. Results show there is an emotion recognition difficulty in autism, with a mean effect size of 0.80 which reduces to 0.41 when a correction for publication bias is applied. Recognition of happiness was only marginally impaired in autism, but recognition of fear was marginally worse than recognition of happiness. This meta-analysis provides an opportunity to survey the state of emotion recognition research in autism and to outline potential future directions.
ERIC Educational Resources Information Center
Golan, Ofer; Gordon, Ilanit; Fichman, Keren; Keinan, Giora
2018-01-01
Children with ASD show emotion recognition difficulties, as part of their social communication deficits. We examined facial emotion recognition (FER) in intellectually disabled children with ASD and in younger typically developing (TD) controls, matched on mental age. Our emotion-matching paradigm employed three different modalities: facial, vocal…
ERIC Educational Resources Information Center
Schmidt, Adam T.; Hanten, Gerri R.; Li, Xiaoqi; Orsten, Kimberley D.; Levin, Harvey S.
2010-01-01
Children with closed head injuries often experience significant and persistent disruptions in their social and behavioral functioning. Studies with adults sustaining a traumatic brain injury (TBI) indicate deficits in emotion recognition and suggest that these difficulties may underlie some of the social deficits. The goal of the current study was…
Schultebraucks, Katharina; Deuter, Christian E; Duesenberg, Moritz; Schulze, Lars; Hellmann-Regen, Julian; Domke, Antonia; Lockenvitz, Lisa; Kuehl, Linn K; Otte, Christian; Wingenfeld, Katja
2016-09-01
Selective attention toward emotional cues and emotion recognition of facial expressions are important aspects of social cognition. Stress modulates social cognition through cortisol, which acts on glucocorticoid (GR) and mineralocorticoid receptors (MR) in the brain. We examined the role of MR activation on attentional bias toward emotional cues and on emotion recognition. We included 40 healthy young women and 40 healthy young men (mean age 23.9 ± 3.3), who received either 0.4 mg of the MR agonist fludrocortisone or placebo. A dot-probe paradigm was used to test for attentional biases toward emotional cues (happy and sad faces). Moreover, we used a facial emotion recognition task to investigate the ability to recognize emotional valence (anger and sadness) from facial expression in four graded categories of emotional intensity (20, 30, 40, and 80%). In the emotional dot-probe task, we found a main effect of treatment and a treatment × valence interaction. Post hoc analyses revealed an attentional bias away from sad faces after placebo intake and, after fludrocortisone, a shift in selective attention toward sad faces compared to placebo. We found no attentional bias toward happy faces after fludrocortisone or placebo intake. In the facial emotion recognition task, there was no main effect of treatment. MR stimulation seems to be important in modulating quick, automatic emotional processing, i.e., a shift in selective attention toward negative emotional cues. Our results confirm and extend previous findings of MR function. However, we did not find an effect of MR stimulation on emotion recognition.
Martínez-Castilla, Pastora; Burt, Michael; Borgatti, Renato; Gagliardi, Chiara
2015-01-01
In this study both the matching and developmental trajectories approaches were used to clarify questions that remain open in the literature on facial emotion recognition in Williams syndrome (WS) and Down syndrome (DS). The matching approach showed that individuals with WS or DS exhibit neither proficiency for the expression of happiness nor specific impairments for negative emotions. Instead, they present the same pattern of emotion recognition as typically developing (TD) individuals. Thus, the better performance on the recognition of positive compared to negative emotions usually reported in WS and DS is not specific of these populations but seems to represent a typical pattern. Prior studies based on the matching approach suggested that the development of facial emotion recognition is delayed in WS and atypical in DS. Nevertheless, and even though performance levels were lower in DS than in WS, the developmental trajectories approach used in this study evidenced that not only individuals with DS but also those with WS present atypical development in facial emotion recognition. Unlike in the TD participants, where developmental changes were observed along with age, in the WS and DS groups, the development of facial emotion recognition was static. Both individuals with WS and those with DS reached an early maximum developmental level due to cognitive constraints.
Age differences in right-wing authoritarianism and their relation to emotion recognition.
Ruffman, Ted; Wilson, Marc; Henry, Julie D; Dawson, Abigail; Chen, Yan; Kladnitski, Natalie; Myftari, Ella; Murray, Janice; Halberstadt, Jamin; Hunter, John A
2016-03-01
This study examined the correlates of right-wing authoritarianism (RWA) in older adults. Participants were given tasks measuring emotion recognition, executive functions and fluid IQ and questionnaires measuring RWA, perceived threat and social dominance orientation. Study 1 established higher age-related RWA across the age span in more than 2,600 New Zealanders. Studies 2 to 4 found that threat, education, social dominance and age all predicted unique variance in older adults' RWA, but the most consistent predictor was emotion recognition, predicting unique variance in older adults' RWA independent of all other variables. We argue that older adults' worse emotion recognition is associated with a more general change in social judgment. Expression of extreme attitudes (right- or left-wing) has the potential to antagonize others, but worse emotion recognition means that subtle signals will not be perceived, making the expression of extreme attitudes more likely. Our findings are consistent with other studies showing that worsening emotion recognition underlies age-related declines in verbosity, understanding of social gaffes, and ability to detect lies. Such results indicate that emotion recognition is a core social insight linked to many aspects of social cognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Facial emotion recognition in patients with focal and diffuse axonal injury.
Yassin, Walid; Callahan, Brandy L; Ubukata, Shiho; Sugihara, Genichi; Murai, Toshiya; Ueda, Keita
2017-01-01
Facial emotion recognition impairment has been well documented in patients with traumatic brain injury. Studies exploring the neural substrates involved in such deficits have implicated specific grey matter structures (e.g. orbitofrontal regions), as well as diffuse white matter damage. Our study aims to clarify whether different types of injuries (i.e. focal vs. diffuse) will lead to different types of impairments on facial emotion recognition tasks, as no study has directly compared these patients. The present study examined performance and response patterns on a facial emotion recognition task in 14 participants with diffuse axonal injury (DAI), 14 with focal injury (FI) and 22 healthy controls. We found that, overall, participants with FI and DAI performed more poorly than controls on the facial emotion recognition task. Further, we observed comparable emotion recognition performance in participants with FI and DAI, despite differences in the nature and distribution of their lesions. However, the rating response pattern between the patient groups was different. This is the first study to show that pure DAI, without gross focal lesions, can independently lead to facial emotion recognition deficits and that rating patterns differ depending on the type and location of trauma.
Familiarity and face emotion recognition in patients with schizophrenia.
Lahera, Guillermo; Herrera, Sara; Fernández, Cristina; Bardón, Marta; de los Ángeles, Victoria; Fernández-Liria, Alberto
2014-01-01
To assess emotion recognition in familiar and unknown faces in a sample of schizophrenic patients and healthy controls. Face emotion recognition of 18 outpatients diagnosed with schizophrenia (DSM-IV-TR) and 18 healthy volunteers was assessed with two Emotion Recognition Tasks using familiar faces and unknown faces. Each subject was accompanied by 4 familiar people (parents, siblings or friends), who were photographed while expressing the 6 basic Ekman emotions. Face emotion recognition in familiar faces was assessed with this ad hoc instrument. In each case, the patient scored (from 1 to 10) the subjective familiarity and affective valence corresponding to each person. Patients with schizophrenia not only showed a deficit in the recognition of emotions on unknown faces (p=.01), but they also showed an even more pronounced deficit on familiar faces (p=.001). Controls had a similar success rate in the unknown faces task (mean: 18 +/- 2.2) and the familiar faces task (mean: 17.4 +/- 3). However, patients had a significantly lower score in the familiar faces task (mean: 13.2 +/- 3.8) than in the unknown faces task (mean: 16 +/- 2.4; p<.05). In both tests, the highest number of errors occurred for the emotions of anger and fear. Subjectively, the patient group reported a lower level of familiarity and emotional valence toward their respective relatives (p<.01). The sense of familiarity may be a factor involved in face emotion recognition, and it may be disturbed in schizophrenia. © 2013.
Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.
Nummenmaa, Lauri; Calvo, Manuel G
2015-04-01
Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. Robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition
NASA Astrophysics Data System (ADS)
Kim, Jonghwa; André, Elisabeth
This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. Compared to audio-visual emotion channels such as facial expression or speech, physiological signals have so far received little attention in emotion recognition. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, and multiscale entropy, is proposed in order to search for the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is demonstrated by emotion recognition results.
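Of the feature families listed, multiscale entropy is perhaps the least standard, so a compact sketch of one common variant (coarse-graining followed by sample entropy) is given below. The signal is a random placeholder standing in for a real biosignal channel; this is an illustration of the general technique, not the authors' exact formulation.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy: -ln(A/B) with template length m and tolerance r."""
    r = r_factor * np.std(x)
    def count_matches(m):
        templates = np.array([x[i:i + m] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d <= r) - 1        # exclude the self-match
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=5):
    """Coarse-grain the signal at each scale, then take sample entropy."""
    mse = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
        mse.append(sample_entropy(coarse))
    return np.array(mse)

# Hypothetical one-channel biosignal segment (e.g., 4 s sampled at 256 Hz).
rng = np.random.default_rng(0)
signal = rng.normal(size=1024)
print(multiscale_entropy(signal))
```

The resulting entropy-per-scale curve would be concatenated with the other feature groups before classification.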
Golan, Ofer; Baron-Cohen, Simon; Golan, Yael
2008-09-01
Children with autism spectrum conditions (ASC) have difficulties recognizing others' emotions. Research has mostly focused on basic emotion recognition, devoid of context. This study reports the results of a new task assessing recognition of complex emotions and mental states in social contexts. An ASC group (n = 23) was compared to a general population control group (n = 24). Children with ASC performed worse than controls on the task. Using task scores, more than 87% of the participants were correctly allocated to their group. This new test quantifies complex emotion and mental state recognition in life-like situations. Our findings reveal that children with ASC have residual difficulties in this aspect of empathy. The use of language-based compensatory strategies for emotion recognition is discussed.
Speech emotion recognition methods: A literature review
NASA Astrophysics Data System (ADS)
Basharirad, Babak; Moradhaseli, Mohammadreza
2017-10-01
Recently, research attention to emotional speech signals has grown in human-machine interfaces, owing to the availability of high computational capability. Many systems have been proposed in the literature to identify emotional states through speech. Selection of suitable feature sets, design of proper classification methods, and preparation of an appropriate dataset are the main issues in speech emotion recognition systems. This paper critically analyzes the currently available approaches to speech emotion recognition in terms of three evaluation parameters (feature set, classification of features, and accuracy). In addition, it evaluates the performance and limitations of available methods, and highlights promising directions for the improvement of speech emotion recognition systems.
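The review surveys methods rather than one system, but a typical baseline of the sort it covers, per-utterance MFCC statistics fed to an SVM, can be sketched in a few lines. The file names and labels below are hypothetical, and librosa and scikit-learn are assumed available.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_features(path):
    """Mean and std of MFCCs (plus deltas) over one utterance."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    delta = librosa.feature.delta(mfcc)
    feats = np.vstack([mfcc, delta])                       # (26, n_frames)
    return np.hstack([feats.mean(axis=1), feats.std(axis=1)])

# Hypothetical labeled corpus: (wav path, emotion label) pairs.
corpus = [('wav/anger_001.wav', 'anger'),
          ('wav/happy_001.wav', 'happy')]  # ... more utterances in practice

X = np.array([utterance_features(p) for p, _ in corpus])
y = [label for _, label in corpus]

clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10))
clf.fit(X, y)
print(clf.predict(X[:1]))
```

Stronger systems in the surveyed literature replace the global statistics with frame-level modeling and the SVM with sequence or neural classifiers, but the feature/classifier/dataset split above is the shared skeleton.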
[Emotional facial expression recognition impairment in Parkinson disease].
Lachenal-Chevallet, Karine; Bediou, Benoit; Bouvard, Martine; Thobois, Stéphane; Broussolle, Emmanuel; Vighetto, Alain; Krolak-Salmon, Pierre
2006-03-01
Some behavioral disturbances observed in Parkinson's disease (PD) could be related to impaired recognition of various social messages, particularly emotional facial expressions. Facial expression recognition was assessed using morphed faces (five emotions: happiness, fear, anger, disgust, neutral) and compared to gender recognition and general cognitive assessment in 12 patients with Parkinson's disease and 14 control subjects. Facial expression recognition was impaired among patients, whereas gender recognition, visuo-perceptive capacities and total efficiency were preserved. Post hoc analyses disclosed a deficit for fear and disgust recognition compared to control subjects. The impairment of emotional facial expression recognition in PD appears independent of other cognitive deficits. It may be related to the dopaminergic depletion in basal ganglia and limbic brain regions, and could play a part in the psycho-behavioral disorders, particularly the communication disorders, observed in Parkinson's disease patients.
Emotion recognition in Parkinson's disease: Static and dynamic factors.
Wasser, Cory I; Evans, Felicity; Kempnich, Clare; Glikmann-Johnston, Yifat; Andrews, Sophie C; Thyagarajan, Dominic; Stout, Julie C
2018-02-01
The authors tested the hypothesis that Parkinson's disease (PD) participants would perform better in an emotion recognition task with dynamic (video) stimuli compared to a task using only static (photograph) stimuli and compared performances on both tasks to healthy control participants. In a within-subjects study, 21 PD participants and 20 age-matched healthy controls performed both static and dynamic emotion recognition tasks. The authors used a 2-way analysis of variance (controlling for individual participant variance) to determine the effect of group (PD, control) on emotion recognition performance in static and dynamic facial recognition tasks. Groups did not significantly differ in their performances on the static and dynamic tasks; however, the trend was suggestive that PD participants performed worse than controls. PD participants may have subtle emotion recognition deficits that are not ameliorated by the addition of contextual cues, similar to those found in everyday scenarios. Consistent with previous literature, the results suggest that PD participants may have underlying emotion recognition deficits, which may impact their social functioning. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.
2018-03-01
The paper proposes an automatic facial emotion recognition algorithm comprising two main components: feature extraction and expression recognition. The algorithm applies a Gabor filter bank at fiducial points to extract facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. The system operates in two stages. In the training phase, it classifies all training expressions into six classes, one for each of the six emotions considered. In the recognition phase, it applies the Gabor bank to a face image, locates the fiducial points, and feeds the resulting features to the trained neural architecture.
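A hedged sketch of the Gabor stage follows: build an even/odd kernel bank over scales and orientations with OpenCV, convolve, and sample response magnitudes at fiducial points. The landmark coordinates, file name, and bank parameters are placeholders; the FAP features and the neural classifier from the paper are omitted here.

```python
import cv2
import numpy as np

def gabor_bank(n_orient=8, n_scale=5, ksize=31):
    """Pairs of even/odd Gabor kernels (psi = 0 and pi/2) per scale/orientation."""
    bank = []
    for s in range(n_scale):
        lambd = 4.0 * 2 ** (s / 2.0)       # wavelength grows with scale
        sigma = 0.56 * lambd               # common sigma/wavelength ratio
        for o in range(n_orient):
            theta = np.pi * o / n_orient
            even = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0)
            odd = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, np.pi / 2)
            bank.append((even, odd))
    return bank

def gabor_magnitudes(gray, points, bank):
    """Gabor response magnitudes sampled at fiducial points (assumed given)."""
    img = gray.astype(np.float32)
    feats = []
    for even, odd in bank:
        re = cv2.filter2D(img, cv2.CV_32F, even)
        im = cv2.filter2D(img, cv2.CV_32F, odd)
        mag = np.sqrt(re ** 2 + im ** 2)
        feats.extend(mag[y, x] for (x, y) in points)
    return np.array(feats)

# Hypothetical input: a grayscale face crop and a few fiducial points
# (eye corners, mouth corners); a landmark detector would supply these.
gray = cv2.imread('face.png', cv2.IMREAD_GRAYSCALE)   # placeholder image path
points = [(30, 40), (70, 40), (35, 80), (65, 80)]     # placeholder landmarks
features = gabor_magnitudes(gray, points, gabor_bank())
# 'features', optionally concatenated with FAP values, would feed the
# trained neural classifier in the recognition phase.
```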
ERIC Educational Resources Information Center
Treese, Anne-Cecile; Johansson, Mikael; Lindgren, Magnus
2010-01-01
The emotional salience of faces has previously been shown to induce memory distortions in recognition memory tasks. This event-related potential (ERP) study used repeated runs of a continuous recognition task with emotional and neutral faces to investigate emotion-induced memory distortions. In the second and third runs, participants made more…
Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M
2013-01-01
This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion.
Alfimova, M V; Golimbet, V E; Korovaitseva, G I; Lezheiko, T V; Abramova, L I; Aksenova, E V; Bolgov, M I
2014-01-01
The 5-HTTLPR SLC6A4 and catechol-O-methyltransferase (COMT) Val158Met polymorphisms are reported to be associated with the processing of facial expressions in the general population. Impaired recognition of facial expressions, which is characteristic of schizophrenia, negatively impacts the social adaptation of patients. To search for molecular mechanisms of this deficit, we studied the main and epistatic effects of the 5-HTTLPR and Val158Met polymorphisms on facial emotion recognition in patients with schizophrenia (n=299) and healthy controls (n=232). The 5-HTTLPR polymorphism was associated with emotion recognition in patients. The ll-homozygotes recognized facial emotions significantly better than s-allele carriers (F=8.00; p=0.005). Although the recognition of facial emotions was correlated with negative symptoms, verbal learning and trait anxiety, these variables did not significantly modify the association. In both groups, no effect of COMT on the recognition of facial emotions was found.
Theory of mind and recognition of facial emotion in dementia: challenge to current concepts.
Freedman, Morris; Binns, Malcolm A; Black, Sandra E; Murphy, Cara; Stuss, Donald T
2013-01-01
Current literature suggests that theory of mind (ToM) and recognition of facial emotion are impaired in behavioral variant frontotemporal dementia (bvFTD). In contrast, studies suggest that ToM is spared in Alzheimer disease (AD). However, there is controversy whether recognition of emotion in faces is impaired in AD. This study challenges the concepts that ToM is preserved in AD and that recognition of facial emotion is impaired in bvFTD. ToM, recognition of facial emotion, and identification of emotions associated with video vignettes were studied in bvFTD, AD, and normal controls. ToM was assessed using false-belief and visual perspective-taking tasks. Identification of facial emotion was tested using Ekman and Friesen's pictures of facial affect. After adjusting for relevant covariates, there were significant ToM deficits in bvFTD and AD compared with controls, whereas neither group was impaired in the identification of emotions associated with video vignettes. There was borderline impairment in recognizing angry faces in bvFTD. Patients with AD showed significant deficits on false belief and visual perspective taking, and bvFTD patients were impaired on second-order false belief. We report novel findings challenging the concepts that ToM is spared in AD and that recognition of facial emotion is impaired in bvFTD.
Sullivan, Susan; Campbell, Anna; Hutton, Sam B; Ruffman, Ted
2017-05-01
Research indicates that older adults' (≥60 years) emotion recognition is worse than that of young adults; that young and older men's emotion recognition is worse than that of young and older women, respectively; and that older adults look at mouths, relative to eyes, more than young adults do. Nevertheless, previous research has not compared older men's and women's looking at emotion faces, so the present study had two aims: (a) to examine whether the tendency to look at mouths is stronger amongst older men compared with older women and (b) to examine whether men's mouth looking correlates with better emotion recognition. We examined the emotion recognition abilities and spontaneous gaze patterns of young (n = 60) and older (n = 58) males and females as they labelled emotion faces. Older men spontaneously looked more to mouths than older women, and older men's looking at mouths correlated with their emotion recognition, whereas women's looking at eyes correlated with their emotion recognition. The findings are discussed in relation to a growing body of research suggesting both age and gender differences in response to emotional stimuli and the differential efficacy of mouth and eye looking for men and women. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Jarros, Rafaela Behs; Salum, Giovanni Abrahão; Belem da Silva, Cristiano Tschiedel; Toazza, Rudineia; de Abreu Costa, Marianna; Fumagalli de Salles, Jerusa; Manfro, Gisele Gus
2012-02-01
The aim of the present study was to test the ability of adolescents with a current anxiety diagnosis to recognize facial affective expressions, compared to those without an anxiety disorder. Forty cases and 27 controls were selected from a larger cross-sectional community sample of adolescents, aged from 10 to 17 years old. Adolescents' recognition of six human emotions (sadness, anger, disgust, happiness, surprise and fear) and neutral faces was assessed through a facial labeling test using Ekman's Pictures of Facial Affect (POFA). Adolescents with anxiety disorders had a higher mean number of errors on angry faces as compared to controls: 3.1 (SD=1.13) vs. 2.5 (SD=2.5), OR=1.72 (CI95% 1.02 to 2.89; p=0.040). However, they named neutral faces more accurately than adolescents without an anxiety diagnosis: 15% of cases vs. 37.1% of controls presented at least one error on neutral faces, OR=3.46 (CI95% 1.02 to 11.7; p=0.047). No differences were found for the other human emotions or in the distribution of errors in each emotional face between the groups. Our findings support an anxiety-mediated influence on the recognition of facial expressions in adolescence. This difficulty in recognizing angry faces, together with greater accuracy in naming neutral faces, may lead to misinterpretation of social cues and could explain some aspects of the impairment in social interactions in adolescents with anxiety disorders. Copyright © 2011 Elsevier Ltd. All rights reserved.
Aging and Emotion Recognition: Not Just a Losing Matter
Sze, Jocelyn A.; Goodkind, Madeleine S.; Gyurak, Anett; Levenson, Robert W.
2013-01-01
Past studies on emotion recognition and aging have found evidence of age-related decline when emotion recognition was assessed by having participants detect single emotions depicted in static images of full or partial (e.g., eye region) faces. These tests afford good experimental control but do not capture the dynamic nature of real-world emotion recognition, which is often characterized by continuous emotional judgments and dynamic multi-modal stimuli. Research suggests that older adults often perform better under conditions that better mimic real-world social contexts. We assessed emotion recognition in young, middle-aged, and older adults using two traditional methods (single emotion judgments of static images of faces and eyes) and an additional method in which participants made continuous emotion judgments of dynamic, multi-modal stimuli (videotaped interactions between young, middle-aged, and older couples). Results revealed an age by test interaction. Largely consistent with prior research, we found some evidence that older adults performed worse than young adults when judging single emotions from images of faces (for sad and disgust faces only) and eyes (for older eyes only), with middle-aged adults falling in between. In contrast, older adults did better than young adults on the test involving continuous emotion judgments of dyadic interactions, with middle-aged adults falling in between. In tests in which target stimuli differed in age, emotion recognition was not facilitated by an age match between participant and target. These findings are discussed in terms of theoretical and methodological implications for the study of aging and emotional processing. PMID:22823183
Niedtfeld, Inga
2017-07-01
Borderline personality disorder (BPD) is characterized by affective instability and interpersonal problems. In the context of social interaction, impairments in empathy are proposed to result in inadequate social behavior. In contrast to findings of reduced cognitive empathy, some authors have suggested enhanced emotional empathy in BPD. We investigated whether ambiguity leads to decreased cognitive or emotional empathy in BPD. Thirty-four patients with BPD and thirty-two healthy controls were presented with video clips in which emotion was conveyed through prosody, facial expression, and speech content. Experimental conditions were designed to induce ambiguity by presenting neutral valence in one of these communication channels. Subjects were asked to indicate the actors' emotional valence, their decision confidence, and their own emotional state. BPD patients showed increased emotional empathy when neutral stories comprised nonverbally expressed emotions. In contrast, when all channels were emotional, patients showed lower emotional empathy than healthy controls. Regarding cognitive empathy, there were no significant differences between BPD patients and healthy control subjects in recognition accuracy, but decision confidence was reduced in BPD. These results suggest that patients with BPD show altered emotional empathy, experiencing higher rates of emotional contagion when emotions are expressed nonverbally. The latter may contribute to misunderstandings and inadequate social behavior. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Cognitive mechanisms of diazepam administration: a healthy volunteer model of emotional processing.
Pringle, A; Warren, M; Gottwald, J; Cowen, P J; Harmer, C J
2016-06-01
Benzodiazepine drugs continue to be prescribed relatively frequently for anxiety disorders, especially where other treatments have failed or when rapid alleviation of anxiety is imperative. The neuropsychological mechanism by which these drugs act to relieve symptoms, however, remains underspecified. Cognitive accounts of anxiety disorders emphasise hypervigilance for threat in the maintenance of the disorders. The current study examined the effects of 7- or 8-day administration of diazepam in healthy participants (n = 36) on a well-validated battery of tasks measuring emotional processing, including measures of vigilance for threat and physiological responses to threat. Compared to placebo, diazepam reduced vigilant-avoidant patterns of emotional attention (p < 0.01) and reduced general startle responses (p < 0.05). Diazepam administration had limited effects on emotional processing, enhancing the response to positive vs negative words in the emotional categorisation task (p < 0.05), modulating emotional memory in terms of false accuracy (p < 0.05) and slowing the recognition of all facial expressions of emotion (p = 0.01). These results have implications for our understanding of the cognitive mechanisms of benzodiazepine treatment. The data reported here suggest that diazepam modulates emotional attention, an effect which may be involved in its therapeutic actions in anxiety.
Action Unit Models of Facial Expression of Emotion in the Presence of Speech
Shah, Miraj; Cooper, David G.; Cao, Houwei; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini
2014-01-01
Automatic recognition of emotion from facial expressions in the presence of speech poses a unique challenge, because talking reveals clues to the affective state of the speaker but distorts the canonical expression of emotion on the face. We introduce a corpus of acted emotion expression in which speech is either present (talking) or absent (silent). The corpus is uniquely suited for analysis of the interplay between the two conditions. We use a multimodal decision-level fusion classifier to combine models of emotion from talking and silent faces, as well as from audio, to recognize five basic emotions: anger, disgust, fear, happy and sad. Our results strongly indicate that emotion prediction from action-unit facial features is less accurate when the person is talking. Modeling talking and silent expressions separately and fusing the two models greatly improves accuracy of prediction in the talking setting. The advantages are most pronounced when silent and talking face models are fused with predictions from audio features. In this multi-modal prediction, both the combination of modalities and the separate models of talking and silent facial expression of emotion contribute to the improvement. PMID:25525561
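The decision-level fusion described above can be sketched roughly as follows: one classifier per source (talking faces, silent faces, audio) produces class posteriors, which are averaged before taking the final decision. The synthetic features, logistic-regression models, and fusion weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["anger", "disgust", "fear", "happy", "sad"]
rng = np.random.default_rng(0)

# Illustrative stand-ins for action-unit features (talking/silent) and audio features.
X_talk, X_silent, X_audio = (rng.normal(size=(200, 17)) for _ in range(3))
y = rng.integers(0, len(EMOTIONS), size=200)

# One model per modality/condition (the paper's models are more elaborate).
models = [LogisticRegression(max_iter=1000).fit(X, y)
          for X in (X_talk, X_silent, X_audio)]

def fuse_predict(x_talk, x_silent, x_audio, weights=(0.3, 0.3, 0.4)):
    """Decision-level fusion: weighted average of per-model class posteriors."""
    probs = [m.predict_proba(x)[0] for m, x in
             zip(models, (x_talk, x_silent, x_audio))]
    fused = np.average(probs, axis=0, weights=weights)
    return EMOTIONS[int(np.argmax(fused))]

print(fuse_predict(X_talk[:1], X_silent[:1], X_audio[:1]))
```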
Schmaal, Lianne; Marquand, Andre F; Rhebergen, Didi; van Tol, Marie-José; Ruhé, Henricus G; van der Wee, Nic J A; Veltman, Dick J; Penninx, Brenda W J H
2015-08-15
A chronic course of major depressive disorder (MDD) is associated with profound alterations in brain volumes and emotional and cognitive processing. However, no neurobiological markers have been identified that prospectively predict MDD course trajectories. This study evaluated the prognostic value of different neuroimaging modalities, clinical characteristics, and their combination to classify MDD course trajectories. One hundred eighteen MDD patients underwent structural and functional magnetic resonance imaging (MRI) (emotional facial expressions and executive functioning) and were clinically followed-up at 2 years. Three MDD trajectories (chronic n = 23, gradual improving n = 36, and fast remission n = 59) were identified based on Life Chart Interview measuring the presence of symptoms each month. Gaussian process classifiers were employed to evaluate prognostic value of neuroimaging data and clinical characteristics (including baseline severity, duration, and comorbidity). Chronic patients could be discriminated from patients with more favorable trajectories from neural responses to various emotional faces (up to 73% accuracy) but not from structural MRI and functional MRI related to executive functioning. Chronic patients could also be discriminated from remitted patients based on clinical characteristics (accuracy 69%) but not when age differences between the groups were taken into account. Combining different task contrasts or data sources increased prediction accuracies in some but not all cases. Our findings provide evidence that the prediction of naturalistic course of depression over 2 years is improved by considering neuroimaging data especially derived from neural responses to emotional facial expressions. Neural responses to emotional salient faces more accurately predicted outcome than clinical data. Copyright © 2015 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
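For readers unfamiliar with the classifier named above, here is a minimal Gaussian process classification sketch in the spirit of the analysis, with a random stand-in for the neuroimaging features and an RBF kernel chosen as a common default; none of this reproduces the authors' actual pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Stand-in features: e.g., neural responses to emotional faces per patient.
X = rng.normal(size=(118, 50))
y = rng.integers(0, 2, size=118)  # 1 = chronic course, 0 = more favorable course

# Binary GP classifier with an RBF kernel (an assumed, common default choice).
clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f}")
```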
McCade, Donna; Savage, Greg; Guastella, Adam; Hickie, Ian B; Lewis, Simon J G; Naismith, Sharon L
2013-09-01
Impaired emotion recognition in dementia is associated with increased patient agitation, behavior management difficulties, and caregiver burden. Emerging evidence supports the presence of very early emotion recognition difficulties in mild cognitive impairment (MCI); however, the relationship between these impairments and psychosocial measures has not yet been explored. Emotion recognition abilities of 27 patients with nonamnestic MCI (naMCI), 29 patients with amnestic MCI (aMCI), and 22 control participants were assessed. Self-report measures assessed patient functional disability, while informants rated the degree of burden they experienced. Difficulty in recognizing anger was evident in the amnestic subtype. Although both patient groups reported greater social functioning disability compared with the controls, a relationship between social dysfunction and anger recognition was evident only for patients with naMCI. A significant association was found between burden and anger recognition in patients with aMCI. Impaired emotion recognition abilities impact MCI subtypes differentially. Interventions targeted at patients with MCI and their caregivers are warranted.
ERIC Educational Resources Information Center
Golan, Ofer; Baron-Cohen, Simon; Golan, Yael
2008-01-01
Children with autism spectrum conditions (ASC) have difficulties recognizing others' emotions. Research has mostly focused on "basic" emotion recognition, devoid of context. This study reports the results of a new task, assessing recognition of "complex" emotions and mental states in social contexts. An ASC group (n = 23) was compared to a general…
Mancuso, Mauro; Magnani, Nadia; Cantagallo, Anna; Rossi, Giulia; Capitani, Donatella; Galletti, Vania; Cardamone, Giuseppe; Robertson, Ian Hamilton
2015-02-01
The aim of our study was to identify the common and separate mechanisms that might underpin emotion recognition impairment in patients with traumatic brain injury (TBI) and schizophrenia (Sz) compared with healthy controls (HCs). We recruited 21 Sz outpatients, 24 severe TBI outpatients, and 38 HCs, and we used eye-tracking to compare facial emotion processing performance. Both Sz and TBI patients were significantly poorer at recognizing facial emotions compared with HCs. Sz patients explored the Pictures of Facial Affect stimuli differently and were significantly worse at recognizing neutral expressions. Selective or sustained attention deficits in TBI may reduce efficient emotion recognition, whereas in Sz a more strategic deficit appears to underlie the observed problem. These findings suggest scope for rehabilitative training focused on emotion recognition, adjusted to each group's underlying deficit.
The recognition of facial emotion expressions in Parkinson's disease.
Assogna, Francesca; Pontieri, Francesco E; Caltagirone, Carlo; Spalletta, Gianfranco
2008-11-01
A limited number of studies in Parkinson's Disease (PD) suggest a disturbance in the recognition of facial emotion expressions. In particular, impaired disgust recognition has been reported in unmedicated and medicated PD patients. However, the results remain inconclusive regarding the degree and selectivity of the emotion recognition impairment, and an associated impairment of almost all basic facial emotions in PD has also been described. Few studies have investigated the relationship with neuropsychiatric and neuropsychological symptoms, mainly with negative results. This inconsistency may be due to many different problems, such as emotion assessment, perception deficits, cognitive impairment, behavioral symptoms, illness severity and antiparkinsonian therapy. Here we review the clinical characteristics and neural structures involved in the recognition of specific facial emotion expressions, and the plausible role of dopamine transmission and dopamine replacement therapy in these processes. Future studies should be directed at clarifying these issues.
Action and Emotion Recognition from Point Light Displays: An Investigation of Gender Differences
Alaerts, Kaat; Nackaerts, Evelien; Meyns, Pieter; Swinnen, Stephan P.; Wenderoth, Nicole
2011-01-01
Folk psychology advocates the existence of gender differences in socio-cognitive functions such as ‘reading’ the mental states of others or discerning subtle differences in body-language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks involving the recognition of distinct features from point light displays (PLDs) depicting bodily movements of a male and female actor. Although recognition scores were high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly longer for males than for females on PLD recognition tasks involving (i) the general recognition of ‘biological motion’ versus ‘non-biological’ (or ‘scrambled’) motion; or (ii) the recognition of the ‘emotional state’ of the PLD-figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) or for recognizing the gender of the PLD-figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the ‘Reading the Mind in the Eyes Test’ (Baron-Cohen, 2001)) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs versus facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions are relatively generalized across facial and bodily emotion perception. Moreover, the tight correlation between a subject's ability to discern subtle emotional cues from PLDs and the subject's ability to discriminate biological from non-biological motion suggests that differences in emotion recognition may, at least to some degree, be related to more basic differences in processing biological motion per se. PMID:21695266
Rodrigo-Ruiz, D; Perez-Gonzalez, J C; Cejudo, J
2017-08-16
It has recently been suggested that children with attention deficit hyperactivity disorder (ADHD) show a deficit in emotional competence and emotional intelligence, specifically in their ability to recognize emotions. We present a systematic review of the scientific literature on the recognition of emotional facial expressions in children with ADHD, in order to establish or rule out the existence of emotional deficits as a primary dysfunction in this disorder and, where appropriate, the effect size of the differences relative to typically developing children. The results reveal both recent interest in the issue and a lack of information. Although there is no complete agreement, most studies show that the recognition of emotional facial expressions is affected in children with ADHD, who are significantly less accurate than children in control groups at recognizing emotions communicated through facial expressions. Some of these studies compare the recognition of different discrete emotions, observing that children with ADHD tend to have greater difficulty recognizing negative emotions, especially anger, fear, and disgust. These results have direct implications for the educational and clinical diagnosis of ADHD and for educational interventions for children with ADHD, in which emotional education might be a valuable aid.
Influences on Facial Emotion Recognition in Deaf Children
ERIC Educational Resources Information Center
Sidera, Francesc; Amadó, Anna; Martínez, Laura
2017-01-01
This exploratory research is aimed at studying facial emotion recognition abilities in deaf children and how they relate to linguistic skills and the characteristics of deafness. A total of 166 participants (75 deaf) aged 3-8 years were administered the following tasks: facial emotion recognition, naming vocabulary and cognitive ability. The…
The Differential Effects of Thalamus and Basal Ganglia on Facial Emotion Recognition
ERIC Educational Resources Information Center
Cheung, Crystal C. Y.; Lee, Tatia M. C.; Yip, James T. H.; King, Kristin E.; Li, Leonard S. W.
2006-01-01
This study examined if subcortical stroke was associated with impaired facial emotion recognition. Furthermore, the lateralization of the impairment and the differential profiles of facial emotion recognition deficits with localized thalamic or basal ganglia damage were also studied. Thirty-eight patients with subcortical strokes and 19 matched…
Towards Real-Time Speech Emotion Recognition for Affective E-Learning
ERIC Educational Resources Information Center
Bahreini, Kiavash; Nadolski, Rob; Westera, Wim
2016-01-01
This paper presents the voice emotion recognition part of the FILTWAM framework for real-time emotion recognition in affective e-learning settings. FILTWAM (Framework for Improving Learning Through Webcams And Microphones) intends to offer timely and appropriate online feedback based upon learner's vocal intonations and facial expressions in order…
Emotion Recognition Abilities and Empathy of Victims of Bullying
ERIC Educational Resources Information Center
Woods, Sarah; Wolke, Dieter; Nowicki, Stephen; Hall, Lynne
2009-01-01
Objectives: Bullying is a form of systematic abuse by peers with often serious consequences for victims. Few studies have considered the role of emotion recognition abilities and empathic behaviour for different bullying roles. This study investigated physical and relational bullying involvement in relation to basic emotion recognition abilities,…
Facial recognition in education system
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish
2017-11-01
Human beings rely extensively on emotions to convey messages and to interpret them. Emotion detection and face recognition can provide an interface between individuals and technologies. Among applications of recognition analysis, face recognition has been the most successful. Many different techniques have been used to recognize facial expressions and to detect emotions across varying poses. In this paper, we propose an efficient method for recognizing facial expressions by tracking facial points and the distances between them. The method can automatically identify a subject's face movements and facial expression in an image, capturing different aspects of emotion and facial expression.
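A minimal sketch of the point-and-distance feature idea: given 2-D facial landmark coordinates from any detector, pairwise distances normalized by the inter-ocular distance yield a simple, scale-tolerant feature vector that could feed an expression classifier. The landmark layout and normalization choice are assumptions for illustration, not the paper's method.

```python
import numpy as np
from itertools import combinations

def landmark_distance_features(landmarks, left_eye=0, right_eye=1):
    """Pairwise distances between facial landmarks, scale-normalized.

    landmarks: (n_points, 2) array of (x, y) coordinates from any
    face-landmark detector. Normalizing by the inter-ocular distance
    makes the features tolerant to face size and camera distance.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    iod = np.linalg.norm(landmarks[left_eye] - landmarks[right_eye])
    dists = [np.linalg.norm(landmarks[i] - landmarks[j])
             for i, j in combinations(range(len(landmarks)), 2)]
    return np.array(dists) / iod

# Toy example: 5 landmarks (two eyes, nose tip, two mouth corners).
pts = [(30, 30), (70, 30), (50, 55), (38, 75), (62, 75)]
print(landmark_distance_features(pts).round(2))
```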
Lysaker, Paul H; Leonhardt, Bethany L; Brüne, Martin; Buck, Kelly D; James, Alison; Vohs, Jenifer; Francis, Michael; Hamm, Jay A; Salvatore, Giampaolo; Ringer, Jamie M; Dimaggio, Giancarlo
2014-09-30
While many people with schizophrenia spectrum disorders experience difficulties understanding the feelings of others, little is known about the psychological antecedents of these deficits. To explore these issues we examined whether deficits in mental state decoding, mental state reasoning and metacognitive capacity predict performance on an emotion recognition task. Participants were 115 adults with a schizophrenia spectrum disorder and 58 adults with substance use disorders but no history of a diagnosis of psychosis, all of whom completed the Eyes and Hinting Tests. Metacognitive capacity was assessed using the Metacognitive Assessment Scale Abbreviated and emotion recognition was assessed using the Bell Lysaker Emotion Recognition Test. Results revealed that the schizophrenia patients performed more poorly than controls on tests of emotion recognition, mental state decoding, mental state reasoning and metacognition. Lesser capacities for mental state decoding, mental state reasoning and metacognition were all uniquely related to emotion recognition within the schizophrenia group, even after controlling for neurocognition and symptoms in a stepwise multiple regression. Results suggest that deficits in emotion recognition in schizophrenia may partly result from a combination of impairments in the ability to judge the cognitive and affective states of others and difficulties forming complex representations of self and others. Published by Elsevier Ireland Ltd.
Basic and complex emotion recognition in children with autism: cross-cultural findings.
Fridenson-Hayo, Shimrit; Berggren, Steve; Lassalle, Amandine; Tal, Shahar; Pigat, Delia; Bölte, Sven; Baron-Cohen, Simon; Golan, Ofer
2016-01-01
Children with autism spectrum conditions (ASC) have emotion recognition (ER) deficits when tested in different expression modalities (face, voice, body). However, these findings usually focus on basic emotions, using one or two expression modalities. In addition, cultural similarities and differences in emotion recognition patterns in children with ASC have not been explored before. The current study examined the similarities and differences in the recognition of basic and complex emotions by children with ASC and typically developing (TD) controls across three cultures: Israel, Britain, and Sweden. Fifty-five children with high-functioning ASC, aged 5-9, were compared to 58 TD children. At each site, groups were matched on age, sex, and IQ. Children were tested using four tasks, examining recognition of basic and complex emotions from voice recordings, videos of facial and bodily expressions, and emotional video scenarios including all modalities in context. Compared to their TD peers, children with ASC showed emotion recognition deficits in both basic and complex emotions on all three modalities and their integration in context. Complex emotions were harder to recognize than basic emotions for the entire sample. Cross-cultural agreement was found for all major findings, with minor deviations on the face and body tasks. Our findings highlight the multimodal nature of ER deficits in ASC, which exist for basic as well as complex emotions and are relatively stable cross-culturally. Cross-cultural research has the potential to reveal both autism-specific universal deficits and the role that specific cultures play in the way empathy operates in different countries.
Whipple, Christina M; Gfeller, Kate; Driscoll, Virginia; Oleson, Jacob; McGregor, Karla
2015-01-01
Effective musical communication requires conveyance of the intended message in a manner perceptible to the receiver. Communication disorders that impair transmitting or decoding of structural features of music (e.g., pitch, timbre) and/or symbolic representation may result in atypical musical communication, which can have a negative impact on music therapy interventions. This study compared recognition of symbolic representation of emotions or movements in music by two groups of children with different communicative characteristics: severe to profound hearing loss (using cochlear implants [CI]) and autism spectrum disorder (ASD). Their responses were compared to those of children with typical development and normal hearing (TD-NH). Accuracy was examined as a function of communicative status, emotional or movement category, and individual characteristics. Participants listened to recorded musical excerpts conveying emotions or movements and matched them with labels. Measures relevant to auditory and/or language function were also gathered. There was no significant difference between the ASD and TD-NH groups in identification of musical emotions or movements. However, the CI group was significantly less accurate than the other two groups in identification of both emotions and movements. Mixed effects logistic regression revealed different patterns of accuracy for specific emotions as a function of group. Conveyance of emotions or movements through music may be decoded differently by persons with different types of communication disorders. Because music is the primary therapeutic tool in music therapy sessions, clinicians should consider these differential abilities when selecting music for clinical interventions focusing on emotions or movement. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Hindocha, Chandni; Freeman, Tom P; Schafer, Grainne; Gardener, Chelsea; Das, Ravi K; Morgan, Celia J A; Curran, H Valerie
2015-03-01
Acute administration of the primary psychoactive constituent of cannabis, Δ-9-tetrahydrocannabinol (THC), impairs human facial affect recognition, implicating the endocannabinoid system in emotional processing. Another main constituent of cannabis, cannabidiol (CBD), has seemingly opposite functional effects on the brain. This study aimed to determine the effects of THC and CBD, both alone and in combination, on emotional facial affect recognition. 48 volunteers, selected for high and low frequency of cannabis use and schizotypy, were administered THC (8 mg), CBD (16 mg), THC+CBD (8 mg+16 mg) and placebo, by inhalation, in a 4-way, double-blind, placebo-controlled crossover design. They completed an emotional facial affect recognition task including fearful, angry, happy, sad, surprise and disgust faces varying in intensity from 20% to 100%. A visual analogue scale (VAS) of feeling 'stoned' was also completed. In comparison to placebo, CBD improved emotional facial affect recognition at 60% emotional intensity; THC was detrimental to the recognition of ambiguous faces of 40% intensity. The combination of THC+CBD produced no impairment. Relative to placebo, both THC alone and combined THC+CBD equally increased feelings of being 'stoned'. CBD did not influence feelings of 'stoned'. No effects of frequency of use or schizotypy were found. In conclusion, CBD improves recognition of emotional facial affect and attenuates the impairment induced by THC. This is the first human study examining the effects of different cannabinoids on emotional processing. It provides preliminary evidence that different pharmacological agents acting upon the endocannabinoid system can both improve and impair recognition of emotional faces. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Emotion recognition ability in mothers at high and low risk for child physical abuse.
Balge, K A; Milner, J S
2000-10-01
The study sought to determine whether high-risk, compared to low-risk, mothers make more emotion recognition errors when they attempt to recognize emotions in children and adults. Thirty-two demographically matched high-risk (n = 16) and low-risk (n = 16) mothers were asked to identify different emotions expressed by children and adults. Sets of high- and low-intensity, visual and auditory emotions were presented. Mothers also completed measures of stress, depression, and ego-strength. High-risk, compared to low-risk, mothers showed a tendency to make more errors on the visual and auditory emotion recognition tasks, with a trend toward more errors on the low-intensity, visual stimuli. However, the observed trends were not significant. Only a post hoc test of error rates across all stimuli indicated that high-risk, compared to low-risk, mothers made significantly more emotion recognition errors. Although situational stress differences were not found, high-risk mothers reported significantly higher levels of general parenting stress and depression and lower levels of ego-strength. Since only trends and a significant post hoc finding of more overall emotion recognition errors in high-risk mothers were observed, additional research is needed to determine whether high-risk mothers have emotion recognition deficits that may impact parent-child interactions. As in prior research, the study found that high-risk mothers reported more parenting stress and depression and less ego-strength.
Bihippocampal damage with emotional dysfunction: impaired auditory recognition of fear.
Ghika-Schmid, F; Ghika, J; Vuilleumier, P; Assal, G; Vuadens, P; Scherer, K; Maeder, P; Uske, A; Bogousslavsky, J
1997-01-01
A right-handed man developed a sudden, transient amnestic syndrome associated with bilateral hemorrhage of the hippocampi, probably due to Urbach-Wiethe disease. In the 3rd month, despite significant hippocampal structural damage on imaging, only a milder degree of retrograde and anterograde amnesia persisted on detailed neuropsychological examination. On systematic testing of recognition of facial and vocal expression of emotion, we found an impairment of the vocal perception of fear, but not of other emotions, such as joy, sadness and anger. Such selective impairment of fear perception was not present in the recognition of facial expression of emotion. Thus emotional perception varies according to the different aspects of emotions and the modality of presentation (faces versus voices). This is consistent with the idea that there may be multiple emotion systems. The study of emotional perception in this unique case of bilateral involvement of the hippocampus suggests that this structure may play a critical role in the recognition of fear in vocal expression, possibly dissociated from that of other emotions and from that of fear in facial expression. In view of recent data suggesting that the amygdala plays a role in the recognition of fear in the auditory as well as the visual modality, this could suggest that the hippocampus may be part of the auditory pathway of fear recognition.
Recognition profile of emotions in natural and virtual faces.
Dyck, Miriam; Winbeck, Maren; Leiberg, Susanne; Chen, Yuhan; Gur, Ruben C; Mathiak, Klaus
2008-01-01
Computer-generated virtual faces become increasingly realistic including the simulation of emotional expressions. These faces can be used as well-controlled, realistic and dynamic stimuli in emotion research. However, the validity of virtual facial expressions in comparison to natural emotion displays still needs to be shown for the different emotions and different age groups. Thirty-two healthy volunteers between the age of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotions: happiness, sadness, anger, fear, disgust, and neutral. Results indicate that virtual emotions were recognized comparable to natural ones. Recognition differences in virtual and natural faces depended on specific emotions: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition results than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure has an influence on emotion recognition. Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g. better modelling of the naso-labial area) may lead to even better results as compared to trained actors. Due to the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications.
Embodied emotion impairment in Huntington's Disease.
Trinkler, Iris; Devignevielle, Sévérine; Achaibou, Amal; Ligneul, Romain V; Brugières, Pierre; Cleret de Langavant, Laurent; De Gelder, Beatrice; Scahill, Rachael; Schwartz, Sophie; Bachoud-Lévi, Anne-Catherine
2017-07-01
Theories of embodied cognition suggest that perceiving an emotion involves somatovisceral and motoric re-experiencing. Here we suggest taking such an embodied stance when looking at emotion processing deficits in patients with Huntington's Disease (HD), a neurodegenerative motor disorder. The literature on these patients' emotion recognition deficit has recently been enriched by some reports of impaired emotion expression. The goal of the study was to find out if expression deficits might be linked to a more motoric level of impairment. We used electromyography (EMG) to compare voluntary emotion expression from words to emotion imitation from static face images, and spontaneous emotion mimicry in 28 HD patients and 24 matched controls. For the latter two imitation conditions, an underlying emotion understanding is not imperative (even though performance might be helped by it). EMG measures were compared to emotion recognition and to the capacity to identify and describe emotions using alexithymia questionnaires. Alexithymia questionnaires tap into the more somato-visceral or interoceptive aspects of emotion perception. Furthermore, we correlated patients' expression and recognition scores to cerebral grey matter volume using voxel-based morphometry (VBM). EMG results replicated impaired voluntary emotion expression in HD. Critically, voluntary imitation and spontaneous mimicry were equally impaired and correlated with impaired recognition. By contrast, alexithymia scores were normal, suggesting that emotion representations on the level of internal experience might be spared. Recognition correlated with brain volume in the caudate as well as in areas previously associated with shared action representations, namely somatosensory, posterior parietal, posterior superior temporal sulcus (pSTS) and subcentral sulcus. Together, these findings indicate that in these patients emotion deficits might be tied to the "motoric level" of emotion expression. Such a double-sided recognition and expression impairment may have important consequences, interrupting empathy in nonverbal communication both ways (understanding and being understood), independently of intact internal experience of emotion. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Relationships among Facial Emotion Recognition, Social Skills, and Quality of Life.
ERIC Educational Resources Information Center
Simon, Elliott W.; And Others
1995-01-01
Forty-six institutionalized adults with mild or moderate mental retardation were administered the Vineland Adaptive Behavior Scales (socialization domain), a subjective measure of quality of life, and a facial emotion recognition test. Facial emotion recognition, quality of life, and social skills appeared to be independent of one another. Facial…
Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition
ERIC Educational Resources Information Center
Freitag, Claudia; Schwarzer, Gudrun
2011-01-01
Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…
REM sleep and emotional face memory in typically-developing children and children with autism.
Tessier, Sophie; Lambert, Andréane; Scherzer, Peter; Jemel, Boutheina; Godbout, Roger
2015-09-01
The relationship between REM sleep and memory was assessed in 13 neurotypical children and 13 children with autism spectrum disorder (ASD). A neutral/positive/negative face recognition task was administered the evening before (learning and immediate recognition) and the morning after (delayed recognition) sleep. The number of rapid eye movements (REMs) and beta and theta EEG activity over the visual areas were measured during REM sleep. Compared to neurotypical children, children with ASD showed more theta activity and longer reaction times (RT) for correct responses in delayed recognition of neutral faces. Both groups showed a positive correlation between sleep and performance, but different patterns emerged: in neurotypical children, accuracy for recalling neutral faces and overall RT improvement overnight correlated with EEG activity and REMs; in children with ASD, overnight RT improvement for positive and negative faces correlated with theta and beta activity, respectively. These results suggest that neurotypical children and children with ASD use different sleep-related brain networks to process faces. Copyright © 2015 Elsevier B.V. All rights reserved.
Preti, Emanuele; Richetin, Juliette; Suttora, Chiara; Pisani, Alberto
2016-04-30
Dysfunctions in social cognition characterize personality disorders. However, mixed results have emerged from the literature on emotion processing. Borderline Personality Disorder (BPD) traits have been associated with enhanced emotion recognition, with impairments, or with functioning equal to controls. These apparent contradictions might result from the complexity of the emotion recognition tasks used and from individual differences in impulsivity and effortful control. We conducted a study in a sample of undergraduate students (n=80), assessing BPD traits, using an emotion recognition task that requires the processing of either visual information only or both visual and acoustic information. We also measured individual differences in impulsivity and effortful control. Results demonstrated the moderating role of some components of impulsivity and effortful control on the capacity of BPD traits to predict anger and happiness recognition. We organize the discussion around the interaction between different components of regulatory functioning and task complexity, toward a better understanding of emotion recognition in BPD samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C
2014-01-01
The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS. PMID:24067301
Assessment of Emotional Experience and Emotional Recognition in Complicated Grief
Fernández-Alcántara, Manuel; Cruz-Quintana, Francisco; Pérez-Marfil, M. N.; Catena-Martínez, Andrés; Pérez-García, Miguel; Turnbull, Oliver H.
2016-01-01
There is substantial evidence of bias in the processing of emotion in people with complicated grief (CG). Previous studies have tended to assess the expression of emotion in CG, but other aspects of emotion (mainly emotion recognition, and the subjective aspects of emotion) have not been addressed, despite their importance for practicing clinicians. A quasi-experimental design with two matched groups (Complicated Grief, N = 24 and Non-Complicated Grief, N = 20) was carried out. The Facial Expression of Emotion Test (emotion recognition), a set of pictures from the International Affective Picture System (subjective experience of emotion) and the Symptom Checklist 90 Revised (psychopathology) were employed. The CG group showed lower scores on the dimension of valence for specific conditions on the IAPS, related to the subjective experience of emotion. In addition, they presented higher values of psychopathology. In contrast, statistically significant results were not found for the recognition of emotion. In conclusion, from a neuropsychological point of view, the subjective aspects of emotion and psychopathology seem central in explaining the experience of those with CG. These results are clinically significant for psychotherapists and psychoanalysts working in the field of grief and loss. PMID:26903928
NASA Astrophysics Data System (ADS)
Anagnostopoulos, Christos Nikolaos; Vovoli, Eftichia
An emotion recognition framework based on sound processing could improve services in human-computer interaction. Various quantitative speech features obtained from sound processing of acted speech were tested as to whether they are sufficient to discriminate between seven emotions. Multilayered perceptrons were trained to classify gender and emotions on the basis of a 24-input vector, which provides information about the prosody of the speaker over the entire sentence using statistics of sound features. Several experiments were performed and the results were presented analytically. Emotion recognition was successful when speakers and utterances were “known” to the classifier. However, severe misclassifications occurred in the utterance-independent framework. Nonetheless, the proposed feature vector achieved promising results for utterance-independent recognition of high- and low-arousal emotions.
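A rough sketch of the classifier type described above: a multilayer perceptron mapping a 24-dimensional vector of prosodic statistics to one of seven emotion classes. The synthetic data, emotion labels, hidden-layer size, and training settings are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Assumed label set; the paper's seven emotions may differ.
EMOTIONS = ["anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise"]
rng = np.random.default_rng(7)

# Stand-in for per-utterance prosody statistics (pitch, energy, ...): 24 values.
X = rng.normal(size=(700, 24))
y = rng.integers(0, len(EMOTIONS), size=700)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=7)
mlp.fit(X_tr, y_tr)
print(f"held-out accuracy: {mlp.score(X_te, y_te):.2f}")
```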
Recognizing Induced Emotions of Happiness and Sadness from Dance Movement
Van Dyck, Edith; Vansteenkiste, Pieter; Lenoir, Matthieu; Lesaffre, Micheline; Leman, Marc
2014-01-01
Recent research revealed that emotional content can be successfully decoded from human dance movement. Most previous studies made use of videos of actors or dancers portraying emotions through choreography. The current study applies emotion induction techniques and free movement in order to examine the recognition of emotional content from dance. Observers (N = 30) watched a set of silent videos showing depersonalized avatars of dancers moving to an emotionally neutral musical stimulus after emotions of either sadness or happiness had been induced. Each of the video clips consisted of two dance performances which were presented side-by-side and were played simultaneously; one of a dancer in the happy condition and one of the same individual in the sad condition. After every film clip, the observers were asked to make forced-choices concerning the emotional state of the dancer. Results revealed that observers were able to identify the emotional state of the dancers with a high degree of accuracy. Moreover, emotions were more often recognized for female dancers than for their male counterparts. In addition, the results of eye tracking measurements unveiled that observers primarily focus on movements of the chest when decoding emotional information from dance movement. The findings of our study show that not merely portrayed emotions, but also induced emotions can be successfully recognized from free dance movement. PMID:24587026
Aviezer, Hillel; Hassin, Ran. R.; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo
2012-01-01
The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG’s impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face’s emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG’s performance was strongly influenced by the diagnosticity of the components: His emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. PMID:22349446
Castro-Vale, Ivone; Severo, Milton; Carvalho, Davide; Mota-Cardoso, Rui
2015-01-01
Emotion recognition is very important for social interaction. Several mental disorders influence facial emotion recognition. War veterans and their offspring are subject to an increased risk of developing psychopathology. Emotion recognition is an important aspect that needs to be addressed in this population. To our knowledge, no test exists that is validated for use with war veterans and their offspring. The current study aimed to validate the JACFEE photo set for studying facial emotion recognition in war veterans and their offspring. The JACFEE photo set was presented to 135 participants, comprising 62 male war veterans and 73 war veterans' offspring. The participants identified the facial emotion presented from amongst the seven emotions tested for: anger, contempt, disgust, fear, happiness, sadness, and surprise. A loglinear model was used to evaluate whether the agreement between the intended and the chosen emotions was higher than expected by chance. Overall agreement between chosen and intended emotions was 76.3% (Cohen kappa = 0.72). The agreement ranged from 63% (sadness expressions) to 91% (happiness expressions). The reliability by emotion ranged from 0.617 to 0.843 and the overall JACFEE photo set Cronbach alpha was 0.911. The offspring showed higher agreement when compared with the veterans (RR: 41.52 vs 12.12, p < 0.001), which supports the construct validity of the test. The JACFEE set of photos showed good validity and reliability indices, which makes it an adequate instrument for researching emotion recognition ability in the study sample of war veterans and their respective offspring.
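The agreement statistics reported above (raw agreement and Cohen's kappa) can be computed as in the following sketch; the `intended` and `chosen` labels are toy data standing in for the photo set's posed emotions and a participant's responses.

```python
from sklearn.metrics import cohen_kappa_score

# Toy data: intended emotion of each photo vs. the emotion a rater chose.
intended = ["anger", "fear", "happiness", "sadness", "anger",
            "surprise", "disgust", "happiness", "contempt", "fear"]
chosen   = ["anger", "fear", "happiness", "anger", "anger",
            "surprise", "disgust", "happiness", "contempt", "sadness"]

raw_agreement = sum(i == c for i, c in zip(intended, chosen)) / len(intended)
kappa = cohen_kappa_score(intended, chosen)  # chance-corrected agreement
print(f"raw agreement = {raw_agreement:.1%}, Cohen's kappa = {kappa:.2f}")
```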
Valence and the development of immediate and long-term false memory illusions.
Howe, Mark L; Candel, Ingrid; Otgaar, Henry; Malone, Catherine; Wimmer, Marina C
2010-01-01
Across five experiments we examined the role of valence in children's and adults' true and false memories. Using the Deese/Roediger-McDermott paradigm and either neutral or negative-emotional lists, both adults' (Experiment 1) and children's (Experiment 2) true recall and recognition was better for neutral than negative items, and although false recall was also higher for neutral items, false recognition was higher for negative items. The last three experiments examined adults' (Experiment 3) and children's (Experiments 4 and 5) 1-week long-term recognition of neutral and negative-emotional information. The results replicated the immediate recall and recognition findings from the first two experiments. More important, these experiments showed that although true recognition decreased over the 1-week interval, false recognition of neutral items remained unchanged whereas false recognition of negative-emotional items increased. These findings are discussed in terms of theories of emotion and memory as well as their forensic implications.
Ventura, Joseph; Wood, Rachel C.; Jimenez, Amy M.; Hellemann, Gerhard S.
2014-01-01
Background In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? Methods A meta-analysis of 102 studies (combined n = 4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Results Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r = .51). In addition, the relationship between FR and EP through voice prosody (r = .58) is as strong as the relationship between FR and EP based on facial stimuli (r = .53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality – facial stimuli and voice prosody. Discussion The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. PMID:24268469
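As background on how per-study correlations such as these are commonly pooled, the sketch below applies fixed-effect averaging in Fisher's z space; the (r, n) pairs are invented, and the published meta-analysis likely used a more elaborate model.

```python
import math

def pool_correlations(studies):
    """Fixed-effect pooling of Pearson r values via Fisher's z transform.

    studies: list of (r, n) pairs. Each z = atanh(r) is weighted by
    n - 3, the inverse variance of Fisher's z.
    """
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return math.tanh(num / den)

# Made-up per-study correlations between FR and EP (r, sample size).
print(round(pool_correlations([(0.48, 60), (0.55, 120), (0.50, 90)]), 2))
```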
Gender Differences in the Recognition of Vocal Emotions
Lausen, Adi; Schacht, Annekathrin
2018-01-01
The conflicting findings from the few studies conducted with regard to gender differences in the recognition of vocal expressions of emotion have left the exact nature of these differences unclear. Several investigators have argued that a comprehensive understanding of gender differences in vocal emotion recognition can only be achieved by replicating these studies while accounting for influential factors such as stimulus type, gender-balanced samples, number of encoders, decoders, and emotional categories. This study aimed to account for these factors by investigating whether emotion recognition from vocal expressions differs as a function of both listeners' and speakers' gender. A total of N = 290 participants were randomly and equally allocated to two groups. One group listened to words and pseudo-words, while the other group listened to sentences and affect bursts. Participants were asked to categorize the stimuli with respect to the expressed emotions in a fixed-choice response format. Overall, females were more accurate than males at decoding vocal emotions; however, when testing for specific emotions, these differences were small in magnitude. Speakers' gender had a significant impact on how listeners judged emotions from the voice. The group listening to words and pseudo-words had higher identification rates for emotions spoken by male than by female actors, whereas in the group listening to sentences and affect bursts the identification rates were higher when emotions were uttered by female than male actors. The mixed pattern for emotion-specific effects, however, indicates that, in the vocal channel, the reliability of emotion judgments is not systematically influenced by speakers' gender and the related stereotypes of emotional expressivity. Together, these results extend previous findings by showing effects of listeners' and speakers' gender on the recognition of vocal emotions. They stress the importance of distinguishing these factors to explain recognition ability in the processing of emotional prosody. PMID:29922202
Mohan, S N; Mukhtar, F; Jobson, L
2016-10-21
Depression is a mood disorder that affects a significant proportion of the population worldwide. In Malaysia and Australia, the number of people diagnosed with depression is on the rise. It has been found that impairments in emotion processing and emotion regulation play a role in the development and maintenance of depression. This study is based on Matsumoto and Hwang's biocultural model of emotion and Triandis' Subjective Culture model. It aims to investigate the influence of culture on emotion processing among Malaysians and Australians with and without major depressive disorder (MDD). This study will adopt a between-group design. Participants will include Malaysian Malays and Caucasian Australians with and without MDD (N=320). There will be four tasks involved in this study, namely: (1) the facial emotion recognition task, (2) the biological motion task, (3) the subjective experience task and (4) the emotion meaning task. It is hypothesised that there will be cultural differences in how participants with and without MDD respond to these emotion tasks and that, pan-culturally, MDD will influence accuracy rates in the facial emotion recognition task and the biological motion task. This study is approved by the Universiti Putra Malaysia Research Ethics Committee (JKEUPM) and the Monash University Human Research Ethics Committee (MUHREC). Permission to conduct the study has also been obtained from the National Medical Research Register (NMRR; NMRR-15-2314-26919). On completion of the study, data will be kept by Universiti Putra Malaysia for a specific period of time before they are destroyed. Data will be published in a collective manner in the form of journal articles with no reference to a specific individual. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Improvement of emotional healthcare system with stress detection from ECG signal.
Tivatansakul, S; Ohkura, M
2015-01-01
Our emotional healthcare system is designed to cope with users' negative emotions in daily life. To make the system more intelligent, we integrated emotion recognition by facial expression to provide appropriate services based on the user's current emotional state. However, our facial-expression-based emotion recognition sometimes confuses certain positive, neutral and negative emotions, which can make the emotional healthcare system provide a relaxation service even though users do not have negative emotions. Therefore, to increase the effectiveness of the system in providing the relaxation service, we integrated stress detection from the ECG signal. Stress detection may be able to address this confusion in emotion recognition by facial expression. Indeed, our results show that integrating stress detection increases the effectiveness and efficiency of the emotional healthcare system in providing services.
Emotional memory for musical excerpts in young and older adults
Alonso, Irene; Dellacherie, Delphine; Samson, Séverine
2015-01-01
The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high than for low arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these issues, we tested emotional memory for music in young and older adults using musical excerpts varying in terms of arousal and valence. Participants completed immediate and 24 h delayed recognition tests. We predicted highly arousing excerpts to be better recognized by both groups in immediate recognition. We hypothesized that arousal may compensate for consolidation deficits in aging, thus showing a more prominent benefit of high over low arousing stimuli in older than younger adults on delayed recognition. We also hypothesized worse retention of negative excerpts for the older group, resulting in a recognition benefit for positive over negative excerpts specific to older adults. Our results suggest that although older adults had worse recognition than young adults overall, effects of emotion on memory do not seem to be modified by aging. Results on immediate recognition suggest that recognition of low arousing excerpts can be affected by valence, with better memory for positive relative to negative low arousing music. However, 24 h delayed recognition results demonstrate effects of emotion on memory consolidation regardless of age, with a recognition benefit for high arousal and for negatively valenced music. The present study highlights the role of emotion on memory consolidation. Findings are examined in light of the literature on emotional memory for music and for other stimuli. We finally discuss the implication of the present results for potential music interventions in aging and dementia. PMID:25814950
Does Facial Expression Recognition Provide a Toehold for the Development of Emotion Understanding?
ERIC Educational Resources Information Center
Strand, Paul S.; Downs, Andrew; Barbosa-Leiker, Celestina
2016-01-01
The authors explored predictions from basic emotion theory (BET) that facial emotion expression recognition skills are insular with respect to their own development, and yet foundational to the development of emotional perspective-taking skills. Participants included 417 preschool children for whom estimates of these 2 emotion understanding…
Biases in facial and vocal emotion recognition in chronic schizophrenia
Dondaine, Thibaut; Robert, Gabriel; Péron, Julie; Grandjean, Didier; Vérin, Marc; Drapier, Dominique; Millet, Bruno
2014-01-01
There has been extensive research on impaired emotion recognition in schizophrenia in the facial and vocal modalities. The literature points to biases toward non-relevant emotions for emotional faces, but few studies have examined biases in emotional recognition across different modalities (facial and vocal). In order to test emotion recognition biases, we exposed 23 patients with stabilized chronic schizophrenia and 23 healthy controls (HCs) to emotional facial and vocal tasks, asking them to rate emotional intensity on visual analog scales. We showed that patients with schizophrenia provided higher intensity ratings on the non-target scales (e.g., the surprise scale for fear stimuli) than HCs for both tasks. Furthermore, with the exception of neutral vocal stimuli, they provided the same intensity ratings on the target scales as the HCs. These findings suggest that patients with chronic schizophrenia have emotional biases when judging emotional stimuli in the visual and vocal modalities. These biases may stem from a basic sensorial deficit, a high-order cognitive dysfunction, or both. The respective roles of prefrontal-subcortical circuitry and the basal ganglia are discussed. PMID:25202287
Encoding conditions affect recognition of vocally expressed emotions across cultures.
Jürgens, Rebecca; Drolet, Matthis; Pirow, Ralph; Scheiner, Elisabeth; Fischer, Julia
2013-01-01
Although the expression of emotions in humans is considered to be largely universal, cultural effects contribute to both emotion expression and recognition. To disentangle the interplay between these factors, play-acted and authentic (non-instructed) vocal expressions of emotions were used, on the assumption that cultural effects may contribute differentially to the recognition of staged and spontaneous emotions. Speech tokens depicting four emotions (anger, sadness, joy, fear) were obtained from German radio archives and re-enacted by professional actors, and presented to 120 participants from Germany, Romania, and Indonesia. Participants in all three countries were poor at distinguishing between play-acted and spontaneous emotional utterances (58.73% correct on average with only marginal cultural differences). Nevertheless, authenticity influenced emotion recognition: across cultures, anger was recognized more accurately when play-acted (z = 15.06, p < 0.001) and sadness when authentic (z = 6.63, p < 0.001), replicating previous findings from German populations. German subjects revealed a slight advantage in recognizing emotions, indicating a moderate in-group advantage. There was no difference between Romanian and Indonesian subjects in the overall emotion recognition. Differential cultural effects became particularly apparent in terms of differential biases in emotion attribution. While all participants labeled play-acted expressions as anger more frequently than expected, German participants exhibited a further bias toward choosing anger for spontaneous stimuli. In contrast to the German sample, Romanian and Indonesian participants were biased toward choosing sadness. These results support the view that emotion recognition rests on a complex interaction of human universals and cultural specificities. Whether and in which way the observed biases are linked to cultural differences in self-construal remains an issue for further investigation.
Instructions to mimic improve facial emotion recognition in people with sub-clinical autism traits.
Lewis, Michael B; Dunn, Emily
2017-11-01
People tend to mimic the facial expressions of others. It has been suggested that this helps provide social glue between affiliated people, but it could also aid recognition of emotions through embodied cognition. The degree of facial mimicry, however, varies between individuals and is limited in people with autism spectrum conditions (ASC). The present study sought to investigate the effect of promoting facial mimicry during a facial-emotion-recognition test. In two experiments, participants without an ASC diagnosis had their autism quotient (AQ) measured. Following a baseline test, they completed an emotion-recognition test again, but half of the participants were asked to mimic the target face they saw prior to making their responses. Mimicry improved emotion recognition, and further analysis revealed that the largest improvement was for participants with higher autism-trait scores. In fact, recognition performance was best overall for people who had high AQ scores but also received the instruction to mimic. Implications for people with ASC are explored.
Face recognition: database acquisition, hybrid algorithms, and human studies
NASA Astrophysics Data System (ADS)
Gutta, Srinivas; Huang, Jeffrey R.; Singh, Dig; Wechsler, Harry
1997-02-01
One of the most important technologies absent in traditional and emerging frontiers of computing is the management of visual information. Faces are accessible 'windows' into the mechanisms that govern our emotional and social lives. The corresponding face recognition tasks considered herein include: (1) surveillance, (2) CBIR, and (3) CBIR subject to correct ID ('match') displaying specific facial landmarks, such as wearing glasses. We developed robust matching ('classification') and retrieval schemes based on hybrid classifiers and showed their feasibility using the FERET database. The hybrid classifier architecture consists of an ensemble of connectionist networks (radial basis functions) and decision trees. The specific characteristics of our hybrid architecture include (a) query by consensus, as provided by ensembles of networks, for coping with the inherent variability of the image formation and data acquisition process, and (b) flexible and adaptive thresholds as opposed to ad hoc and hard thresholds. Experimental results, proving the feasibility of our approach, yield (i) 96% accuracy, using cross validation (CV), for surveillance on a database consisting of 904 images, (ii) 97% accuracy for CBIR tasks on a database of 1084 images, and (iii) 93% accuracy, using CV, for CBIR subject to correct ID match tasks on a database of 200 images.
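The "query by consensus" idea lends itself to a compact illustration. Below is a minimal Python sketch, assuming scikit-learn and its bundled digits data as a stand-in for face features: RBF-kernel SVMs loosely stand in for the paper's RBF connectionist networks, a decision tree completes the hybrid ensemble, and a prediction is accepted only when the agreement fraction clears a threshold (fixed here, whereas the paper's thresholds are adaptive).

```python
# A minimal sketch of "query by consensus" over a hybrid ensemble.
# RBF-kernel SVMs stand in for RBF connectionist networks; the FERET
# data and the adaptive thresholding are not reproduced here.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)          # placeholder for face features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Ensemble of RBF-like classifiers plus a decision tree.
members = [SVC(kernel="rbf", gamma=g).fit(X_tr, y_tr) for g in (1e-3, 1e-4)]
members.append(DecisionTreeClassifier(max_depth=12, random_state=0).fit(X_tr, y_tr))

def consensus_predict(x, threshold=2/3):
    """Return a label only if enough ensemble members agree, else reject (None)."""
    votes = np.array([m.predict(x.reshape(1, -1))[0] for m in members])
    labels, counts = np.unique(votes, return_counts=True)
    best = counts.argmax()
    return labels[best] if counts[best] / len(members) >= threshold else None

preds = [consensus_predict(x) for x in X_te]
answered = [p is not None for p in preds]
acc = np.mean([p == t for p, t in zip(preds, y_te) if p is not None])
print(f"coverage={np.mean(answered):.2f}, accuracy on answered={acc:.2f}")
```

Rejected queries (where the ensemble disagrees) can then be deferred or re-examined, which is the practical point of consensus-based matching.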
Comparison of Classification Methods for Detecting Emotion from Mandarin Speech
NASA Astrophysics Data System (ADS)
Pao, Tsang-Long; Chen, Yu-Te; Yeh, Jun-Heng
It is said that technology comes out from humanity. What is humanity? The very definition of humanity is emotion. Emotion is the basis for all human expression and the underlying theme behind everything that is done, said, thought or imagined. If computers are able to perceive and respond to human emotion, human-computer interaction will be more natural. Several classifiers have been adopted for automatically assigning an emotion category, such as anger, happiness or sadness, to a speech utterance. These classifiers were designed independently and tested on various emotional speech corpora, making it difficult to compare and evaluate their performance. In this paper, we first compared several popular classification methods and evaluated their performance by applying them to a Mandarin speech corpus consisting of five basic emotions: anger, happiness, boredom, sadness and neutral. The extracted feature streams contain MFCC, LPCC, and LPC. The experimental results show that the proposed WD-MKNN classifier achieves an accuracy of 81.4% for the 5-class emotion recognition and outperforms other classification techniques, including KNN, MKNN, DW-KNN, LDA, QDA, GMM, HMM, SVM, and BPNN. Then, to verify the advantage of the proposed method, we compared these classifiers by applying them to another Mandarin expressive speech corpus consisting of two emotions. The experimental results still show that the proposed WD-MKNN outperforms the others.
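A minimal sketch of this kind of comparison harness follows, assuming Python with librosa and scikit-learn. The helper `utterance_features` (a hypothetical name) turns one utterance into a fixed-length vector of MFCC statistics; several off-the-shelf classifiers are then compared by cross-validation. The paper's WD-MKNN has no off-the-shelf equivalent, so a distance-weighted KNN stands in, and a synthetic feature matrix keeps the sketch runnable.

```python
# A minimal sketch of comparing emotion classifiers on speech features.
# WD-MKNN is approximated by scikit-learn's distance-weighted KNN.
import numpy as np
import librosa
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def utterance_features(path, n_mfcc=13):
    """One fixed-length vector per utterance: MFCC means and standard deviations."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Synthetic stand-in for a labeled corpus (real use: stack utterance_features
# over the corpus files). Five emotion classes, 26-dimensional features.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 26))
y = rng.integers(0, 5, 100)

for name, clf in [("KNN (distance-weighted)",
                   KNeighborsClassifier(n_neighbors=5, weights="distance")),
                  ("LDA", LinearDiscriminantAnalysis()),
                  ("SVM (RBF)", SVC(kernel="rbf"))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```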
Limbrecht-Ecklundt, Kerstin; Scheck, Andreas; Jerg-Bretzke, Lucia; Walter, Steffen; Hoffmann, Holger; Traue, Harald C.
2013-01-01
Objective: This article includes the examination of potential methodological problems of the application of a forced choice response format in facial emotion recognition. Methodology: 33 subjects were presented with validated facial stimuli. The task was to make a decision about which emotion was shown. In addition, the subjective certainty concerning the decision was recorded. Results: The detection rates are 68% for fear, 81% for sadness, 85% for anger, 87% for surprise, 88% for disgust, and 94% for happiness, and are thus well above the random probability. Conclusion: This study refutes the concern that the use of forced choice formats may not adequately reflect actual recognition performance. The use of standardized tests to examine emotion recognition ability leads to valid results and can be used in different contexts. For example, the images presented here appear suitable for diagnosing deficits in emotion recognition in the context of psychological disorders and for mapping treatment progress. PMID:23798981
Dalkıran, Mihriban; Tasdemir, Akif; Salihoglu, Tamer; Emul, Murat; Duran, Alaattin; Ugur, Mufit; Yavuz, Ruhi
2017-09-01
People with schizophrenia have impairments in emotion recognition along with other social cognitive deficits. In the current study, we aimed to investigate the immediate benefits of ECT on facial emotion recognition ability. Thirty-two treatment-resistant patients with schizophrenia who had been indicated for ECT were enrolled in the study. Facial emotion stimuli were a set of 56 photographs that depicted seven basic emotions: sadness, anger, happiness, disgust, surprise, fear, and neutral faces. The average age of the participants was 33.4 ± 10.5 years. The rate of recognizing the disgusted facial expression increased significantly after ECT (p < 0.05), and no significant changes were found for the remaining facial expressions (p > 0.05). After ECT, response times for the fearful and happy facial expressions were significantly shorter (p < 0.05). Facial emotion recognition ability is an important social cognitive skill for social harmony, proper relationships and independent living. At the least, ECT sessions do not seem to affect facial emotion recognition ability negatively, and they seem to improve identification of the disgusted facial expression, which is related to dopamine-rich regions in the brain.
Pan, Jiahui; Xie, Qiuyou; Huang, Haiyun; He, Yanbin; Sun, Yuping; Yu, Ronghao; Li, Yuanqing
2018-01-01
For patients with disorders of consciousness (DOC), such as vegetative state (VS) and minimally conscious state (MCS), detecting and assessing the residual cognitive functions of the brain remain challenging. Emotion-related cognitive functions are difficult to detect in patients with DOC using motor response-based clinical assessment scales such as the Coma Recovery Scale-Revised (CRS-R) because DOC patients have motor impairments and are unable to provide sufficient motor responses for emotion-related communication. In this study, we proposed an EEG-based brain-computer interface (BCI) system for emotion recognition in patients with DOC. Eight patients with DOC (5 VS and 3 MCS) and eight healthy controls participated in the BCI-based experiment. During the experiment, two movie clips flashed (appearing and disappearing) eight times with a random interstimulus interval between flashes to evoke P300 potentials. The subjects were instructed to focus on the crying or laughing movie clip and to count the flashes of the corresponding movie clip cued by instruction. The BCI system performed online P300 detection to determine which movie clip the patients responded to and presented the result as feedback. Three of the eight patients and all eight healthy controls achieved online accuracies based on P300 detection that were significantly greater than chance level. P300 potentials were observed in the EEG signals from the three patients. These results indicated that the three patients had the ability to recognize emotions and follow commands. Through spectral analysis, common spatial pattern (CSP) and differential entropy (DE) features in the delta, theta, alpha, beta, and gamma frequency bands were employed to classify the EEG signals during the crying and laughing movie clips. Two patients and all eight healthy controls achieved offline accuracies significantly greater than chance level in the spectral analysis. Furthermore, stable topographic distribution patterns of CSP and DE features were observed in both the healthy subjects and these two patients. Our results suggest that cognitive experiments may be conducted using BCI systems in patients with DOC despite the inability of such patients to provide sufficient behavioral responses. PMID:29867421
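The differential entropy (DE) feature mentioned above has a standard closed form under a Gaussian assumption: DE = ½ ln(2πeσ²) of the band-passed signal. Below is a minimal sketch of per-band DE extraction, assuming Python with NumPy/SciPy; the channel data, sampling rate, and band edges are illustrative, not the study's.

```python
# A minimal sketch of per-band differential entropy (DE) features,
# using DE = 0.5 * ln(2*pi*e*variance) of the band-passed signal.
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def de_features(eeg, fs):
    """eeg: (channels, samples). Returns a (channels, bands) DE feature matrix."""
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)       # band-pass each channel
        var = filtered.var(axis=1)                   # Gaussian variance estimate
        feats.append(0.5 * np.log(2 * np.pi * np.e * var))
    return np.stack(feats, axis=1)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 2000))                # 32 channels, synthetic EEG
print(de_features(eeg, fs=250).shape)                # (32, 5)
```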
Gender differences in emotion recognition: Impact of sensory modality and emotional category.
Lambrecht, Lena; Kreifelts, Benjamin; Wildgruber, Dirk
2014-04-01
Results from studies on gender differences in emotion recognition vary, depending on the types of emotion and the sensory modalities used for stimulus presentation. This makes comparability between different studies problematic. This study investigated emotion recognition of healthy participants (N = 84; 40 males; ages 20 to 70 years), using dynamic stimuli, displayed by two genders in three different sensory modalities (auditory, visual, audio-visual) and five emotional categories. The participants were asked to categorise the stimuli on the basis of their nonverbal emotional content (happy, alluring, neutral, angry, and disgusted). Hit rates and category selection biases were analysed. Women were found to be more accurate in recognition of emotional prosody. This effect was partially mediated by hearing loss for the frequency of 8,000 Hz. Moreover, there was a gender-specific selection bias for alluring stimuli: Men, as compared to women, chose "alluring" more often when a stimulus was presented by a woman as compared to a man.
Gamond, L; Cattaneo, Z
2016-12-01
Consistent evidence suggests that emotional facial expressions are better recognized when the expresser and the perceiver belong to the same social group (in-group advantage). In this study, we used transcranial magnetic stimulation (TMS) to investigate the possible causal involvement of the dorsomedial prefrontal cortex (dmPFC) and of the right temporo-parietal junction (TPJ), two main nodes of the mentalizing neural network, in mediating the in-group advantage in emotion recognition. Participants performed an emotion discrimination task in a minimal (blue/green) group paradigm. We found that interfering with activity in the dmPFC significantly interfered with the effect of minimal group-membership on emotion recognition, reducing participants' ability to discriminate emotions expressed by in-group members. In turn, rTPJ mainly affected emotion discrimination per se, irrespective of group membership. Overall, our results point to a causal role of the dmPFC in mediating the in-group advantage in emotion recognition, favoring intragroup communication. Copyright © 2016 Elsevier Ltd. All rights reserved.
Crossmodal and incremental perception of audiovisual cues to emotional speech.
Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc
2010-01-01
In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests with video clips of emotional utterances collected via a variant of the well-known Velten method. More specifically, we recorded speakers who displayed positive or negative emotions, which were congruent or incongruent with the (emotional) lexical content of the uttered sentence. In order to test this, we conducted two experiments. The first experiment is a perception experiment in which Czech participants, who do not speak Dutch, rate the perceived emotional state of Dutch speakers in a bimodal (audiovisual) or a unimodal (audio- or vision-only) condition. It was found that incongruent emotional speech leads to significantly more extreme perceived emotion scores than congruent emotional speech, where the difference between congruent and incongruent emotional speech is larger for the negative than for the positive conditions. Interestingly, the largest overall differences between congruent and incongruent emotions were found for the audio-only condition, which suggests that posing an incongruent emotion has a particularly strong effect on the spoken realization of emotions. The second experiment uses a gating paradigm to test the recognition speed for various emotional expressions from a speaker's face. In this experiment participants were presented with the same clips as in experiment 1, but this time vision-only. The clips were shown in successive segments (gates) of increasing duration. Results show that participants are surprisingly accurate in their recognition of the various emotions, as they already reach high recognition scores in the first gate (after only 160 ms). Interestingly, the recognition scores rise faster for positive than negative conditions. Finally, the gating results suggest that incongruent emotions are perceived as more intense than congruent emotions, as the former get more extreme recognition scores than the latter, already after a short period of exposure.
Besche-Richard, C; Bourrin-Tisseron, A; Olivier, M; Cuervo-Lombard, C-V; Limosin, F
2012-06-01
The deficits in recognition of facial emotions and attribution of mental states are now well documented in schizophrenic patients. However, little is known about the link between these two complex cognitive functions, especially in schizophrenia. In this study, we attempted to test the link between the recognition of facial emotions and the capacity for mentalization, notably the attribution of beliefs, in healthy and schizophrenic participants. We hypothesized that performance in the recognition of facial emotions, rather than working memory or executive functioning, would be the best predictor of the capacity to attribute a belief. Twenty clinically stabilized schizophrenic participants according to DSM-IV-TR (mean age: 35.9 years, S.D. 9.07; mean education level: 11.15 years, S.D. 2.58), receiving neuroleptic or antipsychotic medication, participated in the study. They were matched on age (mean age: 36.3 years, S.D. 10.9) and educational level (mean educational level: 12.10, S.D. 2.25) with 30 healthy participants. All the participants were evaluated with a pool of tasks testing the recognition of facial emotions (the faces of Baron-Cohen), the attribution of beliefs (two first-order and two second-order stories), working memory (the digit span of the WAIS-III and the Corsi test) and executive functioning (Trail Making Test A and B, Wisconsin Card Sorting Test brief version). Comparing schizophrenic and healthy participants, our results confirmed a difference between performance in the recognition of facial emotions and in the attribution of beliefs. The result of the simple linear regression showed that the recognition of facial emotions, compared to working memory and executive functioning, was the best predictor of performance on the theory-of-mind stories. Our results confirmed, in a sample of schizophrenic patients, the deficits in the recognition of facial emotions and in the attribution of mental states. Our new result is the demonstration that performance in the recognition of facial emotions is the best predictor of performance in the attribution of beliefs. With Marshall et al.'s model of empathy, we can explain this link between the recognition of facial emotions and the comprehension of beliefs. Copyright © 2011 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine
NASA Astrophysics Data System (ADS)
Lawi, Armin; Sya'Rani Machrizzandi, M.
2018-03-01
Facial expression is one of the behavioral characteristics of human beings. The use of a biometrics technology system with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fearful, and disgusted. Then a Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification of facial expressions. The MELS-SVM model, evaluated on our 185 different expression images of 10 persons, achieved a high accuracy of 99.998% using the RBF kernel.
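As a rough illustration of this pipeline, the sketch below chains PCA feature extraction and an RBF-kernel SVM in scikit-learn. The ensemble least-squares construction of MELS-SVM is simplified to a single SVC, and scikit-learn's bundled Olivetti face images stand in for the authors' expression images, so this is a sketch of the approach rather than a reproduction.

```python
# A minimal sketch of a PCA + RBF-SVM face/expression classifier.
# MELS-SVM is simplified to a single SVC; the dataset is a stand-in.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()               # stand-in for expression images
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, stratify=faces.target, random_state=0)

# PCA compresses raw pixels into a small set of components ("eigenfaces"),
# then the RBF-kernel SVM classifies in that reduced space.
model = make_pipeline(PCA(n_components=60, whiten=True), SVC(kernel="rbf", C=10))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```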
Bi, Kun; Chattun, Mahammad Ridwan; Liu, Xiaoxue; Wang, Qiang; Tian, Shui; Zhang, Siqi; Lu, Qing; Yao, Zhijian
2018-06-13
Functional networks are associated with emotional processing in depression. The mapping of dynamic spatio-temporal brain networks can be used to explore individual performance during early negative emotional processing. However, the dysfunctions of functional networks in the low gamma band and their discriminative potential during the early period of emotional face processing remain to be explored. Functional brain networks were constructed from the MEG recordings of 54 depressed patients and 54 controls in the low gamma band (30-48 Hz). The dynamic connectivity regression (DCR) algorithm analyzed the individual change points of time series in response to emotional stimuli and constructed individualized spatio-temporal patterns. The nodal characteristics of the patterns were calculated and fed into a support vector machine (SVM). Performance of the classification algorithm in the low gamma band was validated against the dynamic topological characteristics of individual patterns in the alpha and beta bands. The best discrimination accuracy of individual spatio-temporal patterns was 91.01% in the low gamma band. Individual temporal patterns gave better results than group-averaged temporal patterns in all bands. The most important discriminative networks included the affective network (AN) and the fronto-parietal network (FPN) in the low gamma band. The sample size is relatively small, and the high gamma band was not considered. The abnormal dynamic functional networks in the low gamma band during early emotion processing enabled depression recognition. Individual information processing is crucial in the discovery of abnormal spatio-temporal patterns in depression during early negative emotional processing. Individual spatio-temporal patterns may reflect the real dynamic function of subjects, whereas group-averaged data may neglect some individual information. Copyright © 2018. Published by Elsevier B.V.
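The final stage, nodal characteristics of connectivity patterns fed into an SVM, can be sketched as follows, assuming Python with networkx and scikit-learn. The connectivity matrices and labels here are synthetic, the DCR change-point step is not reproduced, and the particular nodal metrics (degree, clustering, betweenness) are illustrative choices.

```python
# A minimal sketch of classifying subjects from nodal graph metrics
# computed on thresholded functional connectivity matrices.
import networkx as nx
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def nodal_features(conn, density=0.2):
    """Threshold a connectivity matrix at a fixed density, then compute per-node metrics."""
    n = conn.shape[0]
    thr = np.quantile(conn[np.triu_indices(n, 1)], 1 - density)
    adj = (conn >= thr).astype(float)
    np.fill_diagonal(adj, 0)                       # no self-loops
    g = nx.from_numpy_array(adj)
    deg = np.array([d for _, d in g.degree()])
    clust = np.array(list(nx.clustering(g).values()))
    btw = np.array(list(nx.betweenness_centrality(g).values()))
    return np.concatenate([deg, clust, btw])       # one vector per subject

rng = np.random.default_rng(0)
mats = [np.abs(rng.standard_normal((30, 30))) for _ in range(60)]  # synthetic subjects
mats = [(m + m.T) / 2 for m in mats]               # symmetric connectivity
X = np.array([nodal_features(m) for m in mats])
y = rng.integers(0, 2, 60)                         # patient vs. control (synthetic)
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```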
How Deep Neural Networks Can Improve Emotion Recognition on Video Data
2016-09-25
Khorrami, Pooya; Paine, Tom Le; Brady, Kevin; Dagli, Charlie; Thomas S…
In this work, we present a system that performs emotion recognition on video data using both convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We present our findings on videos from the Audio/Visual+Emotion Challenge (AV+EC2015). In our experiments, we analyze the effects…
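The architecture family this entry describes, a CNN encoding each frame and an RNN aggregating frame features over time, can be sketched in a few lines of PyTorch. Layer sizes, input shape, and the single-output head (e.g., a valence score) are illustrative, not the authors' configuration.

```python
# A minimal sketch of a CNN+RNN model for video emotion recognition:
# a small CNN encodes each frame, an LSTM aggregates frame features
# over time, and a linear head produces the emotion prediction.
import torch
import torch.nn as nn

class CnnRnnEmotion(nn.Module):
    def __init__(self, n_outputs=1, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                     # per-frame encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim))
        self.rnn = nn.LSTM(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, n_outputs)          # e.g. valence regression

    def forward(self, video):                         # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])                  # predict from last time step

clip = torch.randn(2, 16, 3, 64, 64)                  # 2 clips of 16 frames (synthetic)
print(CnnRnnEmotion()(clip).shape)                    # torch.Size([2, 1])
```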
A Multidimensional Approach to the Study of Emotion Recognition in Autism Spectrum Disorders
Xavier, Jean; Vignaud, Violaine; Ruggiero, Rosa; Bodeau, Nicolas; Cohen, David; Chaby, Laurence
2015-01-01
Although deficits in emotion recognition have been widely reported in autism spectrum disorder (ASD), experiments have been restricted to either facial or vocal expressions. Here, we explored multimodal emotion processing in children with ASD (N = 19) and with typical development (TD, N = 19), considering unimodal (faces and voices) and multimodal (faces/voices simultaneously) stimuli and developmental comorbidities (neuro-visual, language and motor impairments). Compared to TD controls, children with ASD had rather high and heterogeneous emotion recognition scores but also showed several significant differences: lower emotion recognition scores for visual stimuli, for neutral emotion, and a greater number of saccades during the visual task. Multivariate analyses showed that: (1) the difficulties they experienced with visual stimuli were partially alleviated with multimodal stimuli. (2) Developmental age was significantly associated with emotion recognition in TD children, whereas this was the case only for the multimodal task in children with ASD. (3) Language impairments tended to be associated with the emotion recognition scores of ASD children in the auditory modality. Conversely, in the visual or bimodal (visuo-auditory) tasks, no impact of developmental coordination disorder or neuro-visual impairments was found. We conclude that impaired emotion processing constitutes a dimension to explore in the field of ASD, as research has the potential to define more homogeneous subgroups and tailored interventions. However, it is clear that developmental age, the nature of the stimuli, and other developmental comorbidities must also be taken into account when studying this dimension. PMID:26733928
Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.
Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J
2012-11-01
Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of surprised, disgusted, fearful, happy, and neutral facial expressions, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces, indicating that more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.
Emotion recognition based on multiple order features using fractional Fourier transform
NASA Astrophysics Data System (ADS)
Ren, Bo; Liu, Deyin; Qi, Lin
2017-07-01
In order to address the insufficiency of recent algorithms based on the Two-Dimensional Fractional Fourier Transform (2D-FrFT), this paper proposes a multiple-order-feature-based method for emotion recognition. Most existing methods utilize the features of a single order or a couple of orders of the 2D-FrFT. However, different orders of the 2D-FrFT make different contributions to feature extraction for emotion recognition, and combining these features can enhance the performance of an emotion recognition system. The proposed approach obtains numerous features extracted at different orders of the 2D-FrFT in the directions of the x-axis and y-axis, and uses their statistical magnitudes as the final feature vectors for recognition. A Support Vector Machine (SVM) is utilized for classification, and the RML Emotion database and Cohn-Kanade (CK) database are used for the experiments. The experimental results demonstrate the effectiveness of the proposed method.
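A rough sketch of the multiple-order feature idea follows, in Python with NumPy/SciPy. The discrete FrFT is not uniquely defined; here each order is realized as a fractional power of the unitary DFT matrix via eigendecomposition, which is only one possible construction and may differ from the transform used in the paper. The statistical magnitudes (mean, standard deviation, maximum) and the set of orders are illustrative, and the SVM stage is omitted.

```python
# A rough sketch of multi-order 2D fractional-Fourier features.
# The discrete FrFT here is one (non-unique) matrix-power definition.
import numpy as np
from scipy.linalg import dft

def frft_matrix(n, a):
    """One fractional power of the n-point unitary DFT matrix (order a)."""
    w, V = np.linalg.eig(dft(n, scale="sqrtn"))
    return (V * w ** a) @ np.linalg.inv(V)       # V diag(w^a) V^{-1}

def frft2(img, a):
    """Separable 2D transform of order a along both image axes."""
    return frft_matrix(img.shape[0], a) @ img @ frft_matrix(img.shape[1], a).T

def multi_order_features(img, orders=(0.25, 0.5, 0.75, 1.0)):
    """Statistical magnitudes of several orders, concatenated into one vector."""
    feats = []
    for a in orders:
        mag = np.abs(frft2(img, a))
        feats += [mag.mean(), mag.std(), mag.max()]
    return np.array(feats)

face = np.random.default_rng(0).random((32, 32))   # stand-in for a face image
print(multi_order_features(face))                  # 12-dimensional feature vector
```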
Assessing collective affect recognition via the Emotional Aperture Measure.
Sanchez-Burks, Jeffrey; Bartel, Caroline A; Rees, Laura; Huy, Quy
2016-01-01
Curiosity about collective affect is undergoing a revival in many fields. This literature, tracing back to Le Bon's seminal work on crowd psychology, has established the veracity of collective affect and demonstrated its influence on a wide range of group dynamics. More recently, an interest in the perception of collective affect has emerged, revealing a need for a methodological approach for assessing collective emotion recognition to complement measures of individual emotion recognition. This article addresses this need by introducing the Emotional Aperture Measure (EAM). Three studies provide evidence that collective affect recognition requires a processing style distinct from individual emotion recognition and establishes the validity and reliability of the EAM. A sample of working managers further shows how the EAM provides unique insights into how individuals interact with collectives. We discuss how the EAM can advance several lines of research on collective affect.
Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.
Wieckowski, Andrea Trubanova; White, Susan W
2017-01-01
Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less on the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly, but not during the scripted task. Results suggest altered eye gaze to the mouth region, but not the eye region, as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.
Buchy, Lisa; Barbato, Mariapaola; Makowski, Carolina; Bray, Signe; MacMaster, Frank P; Deighton, Stephanie; Addington, Jean
2017-11-01
People with psychosis show deficits in recognizing facial emotions and disrupted activation in the underlying neural circuitry. We evaluated associations between facial emotion recognition and cortical thickness using a correlation-based approach to map structural covariance networks across the brain. Fifteen people with an early psychosis provided magnetic resonance scans and completed the Penn Emotion Recognition and Differentiation tasks. Fifteen historical controls provided magnetic resonance scans. Cortical thickness was computed using CIVET and analyzed with linear models. Seed-based structural covariance analysis was done using the mapping anatomical correlations across the cerebral cortex methodology. To map structural covariance networks involved in facial emotion recognition, the right somatosensory cortex and bilateral fusiform face areas were selected as seeds. Statistics were run in SurfStat. Findings showed greater cortical covariance between the right fusiform face area seed and the right orbitofrontal cortex in controls than in early psychosis subjects. Facial emotion recognition scores were not significantly associated with thickness in any region. A negative effect of Penn Differentiation scores on cortical covariance was seen between the left fusiform face area seed and the right superior parietal lobule in early psychosis subjects. Results suggest that facial emotion recognition ability is related to covariance in a temporal-parietal network in early psychosis. Copyright © 2017 Elsevier B.V. All rights reserved.
An audiovisual emotion recognition system
NASA Astrophysics Data System (ADS)
Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun
2007-12-01
Human emotions can be expressed through many bio-signals. Speech and facial expression are two of them; both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time practice and is supported by several integrated modules. These modules include speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier, and rough-set-based feature selection is a good method for dimension reduction. So 13 speech features out of 37 and 10 facial features out of 33 are selected to represent emotional information, and 52 audiovisual features are selected, owing to the synchronization achieved when speech and video are fused together. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also show that multimodule fused recognition will become the trend of emotion recognition in the future.
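The feature-selection-plus-fusion step can be illustrated compactly, assuming Python with scikit-learn. Mutual-information selection stands in for the paper's rough-set reduction, and all data here are synthetic; the selected dimensionalities (13 of 37 speech features, 10 of 33 facial features) mirror the numbers quoted above.

```python
# A minimal sketch of feature-level audiovisual fusion: reduce each
# modality's features, concatenate, then classify the fused vectors.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200                                        # synchronized audio/video segments
audio = rng.standard_normal((n, 37))           # 37 speech features per segment
video = rng.standard_normal((n, 33))           # 33 facial features per segment
y = rng.integers(0, 6, n)                      # six emotion classes (synthetic)

# Mutual-information selection as a stand-in for rough-set reduction.
sel_a = SelectKBest(mutual_info_classif, k=13).fit(audio, y)
sel_v = SelectKBest(mutual_info_classif, k=10).fit(video, y)
fused = np.hstack([sel_a.transform(audio), sel_v.transform(video)])

print(cross_val_score(SVC(kernel="rbf"), fused, y, cv=5).mean())
```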
Baran Tatar, Zeynep; Yargıç, İlhan; Oflaz, Serap; Büyükgök, Deniz
2015-01-01
Interpersonal relationship disorders in adults with Attention Deficit Hyperactivity Disorder (ADHD) can be associated with impairment of non-verbal communication. The purpose of our study was to compare the emotion recognition, facial recognition and neuropsychological assessments of adult ADHD patients with those of healthy controls, and thus to determine the effect of neuropsychological data on the recognition of emotional expressions. This study, which was based on a case-control design, was conducted with patients diagnosed with ADHD according to the DSM-IV-TR, being followed and monitored at the adult ADHD clinic of the Psychiatry Department of the Istanbul University Istanbul Medical Faculty Hospital. The study group consisted of 40 adults (27.5% female) between the ages of 20-65 (mean age 25.96 ± 6.07; education level: 15.02 ± 2.34 years) diagnosed with ADHD, and 40 controls who were matched with the study group with respect to age, gender, and education level. In the ADHD group, 14 (35%) of the patients had concomitant diseases. Pictures of Facial Affect, the Benton Face Recognition Test, and the Continuous Performance Test were used to respectively evaluate emotion recognition, facial recognition, and attention deficit and impulsivity. It was determined that, in comparison to the control group, the ADHD group made more mistakes in recognizing all types of emotional expressions and neutral expressions. The ADHD group also made more cognitive errors. Facial recognition was similar in both groups. It was determined that impulsivity had a significant effect on facial recognition. The social relationship disorders observed in ADHD can be affected by emotion recognition processes. In future studies, it may be possible to investigate the effects that early psychopharmacological and psychotherapeutic interventions administered for the main symptoms of ADHD have on the impairment of emotion recognition.
Drapier, D; Péron, J; Leray, E; Sauleau, P; Biseul, I; Drapier, S; Le Jeune, F; Travers, D; Bourguignon, A; Haegelen, C; Millet, B; Vérin, M
2008-09-01
To test the hypothesis that emotion recognition and apathy share the same functional circuit involving the subthalamic nucleus (STN), a consecutive series of 17 patients with advanced Parkinson's disease (PD) was assessed 3 months before (M-3) and 3 months after (M+3) STN deep brain stimulation (DBS). Mean (±S.D.) age at surgery was 56.9 (8.7) years. Mean disease duration at surgery was 11.8 (2.6) years. Apathy was measured using the Apathy Evaluation Scale (AES) at both M-3 and M+3. Patients were also assessed using a computerised paradigm of facial emotion recognition [Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto: Consulting Psychologist Press] before and after STN DBS. Prior to this, the Benton Facial Recognition Test was used to check that the ability to perceive faces was intact. Apathy had significantly worsened at M+3 (42.5±8.9, p=0.006) after STN DBS, relative to the preoperative assessment (37.2±5.5). There was also a significant reduction in recognition percentages for facial expressions of fear (43.1%±22.9 vs. 61.6%±21.4, p=0.022) and sadness (52.7%±19.1 vs. 67.6%±22.8, p=0.031) after STN DBS. However, the postoperative worsening of apathy and the emotion recognition impairment were not correlated. Our results confirm that the STN is involved in both the apathy and emotion recognition networks. However, the absence of any correlation between apathy and emotion recognition impairment suggests that the worsening of apathy following surgery cannot be explained by a lack of facial emotion recognition, and that its behavioural and cognitive components should therefore also be taken into consideration.
Emotional Recognition in Autism Spectrum Conditions from Voices and Faces
ERIC Educational Resources Information Center
Stewart, Mary E.; McAdam, Clair; Ota, Mitsuhiko; Peppe, Sue; Cleland, Joanne
2013-01-01
The present study reports on a new vocal emotion recognition task and assesses whether people with autism spectrum conditions (ASC) perform differently from typically developed individuals on tests of emotional identification from both the face and the voice. The new test of vocal emotion contained trials in which the vocal emotion of the sentence…
The Primacy of Perceiving: Emotion Recognition Buffers Negative Effects of Emotional Labor
ERIC Educational Resources Information Center
Bechtoldt, Myriam N.; Rohrmann, Sonja; De Pater, Irene E.; Beersma, Bianca
2011-01-01
There is ample empirical evidence for negative effects of emotional labor (surface acting and deep acting) on workers' well-being. This study analyzed to what extent workers' ability to recognize others' emotions may buffer these effects. In a 4-week study with 85 nurses and police officers, emotion recognition moderated the relationship between…
Aviezer, Hillel; Hassin, Ran R; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo
2012-04-01
The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG's impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face's emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG's performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. Copyright © 2012 Elsevier Ltd. All rights reserved.
Oldehinkel, Albertine J; Hartman, Catharina A; Van Oort, Floor V A; Nederhof, Esther
2015-01-01
Background Some adolescents function poorly in apparently benign environments, while others thrive despite hassles and difficulties. The aim of this study was to examine if adolescents with specialized skills in the recognition of either positive or negative emotions have a context-dependent risk of developing an anxiety or depressive disorder during adolescence, depending on exposure to positive or harsh parenting. Methods Data came from a large prospective Dutch population study (N = 1539). At age 11, perceived parental rejection and emotional warmth were measured by questionnaire, and emotion recognition skills by means of a reaction-time task. Lifetime diagnoses of anxiety and depressive disorders were assessed at about age 19, using a standardized diagnostic interview. Results Adolescents who were specialized in the recognition of positive emotions had a relatively high probability to develop an anxiety disorder when exposed to parental rejection (B(specialization × rejection) = 0.23, P < 0.01) and a relatively low probability in response to parental emotional warmth (B(specialization × warmth) = −0.24, P = 0.01), while the opposite pattern was found for specialists in negative emotions. The effect of parental emotional warmth on depression onset was likewise modified by emotion recognition specialization (B = −0.13, P = 0.03), but the effect of parental rejection was not (B = 0.02, P = 0.72). In general, the relative advantage of specialists in negative emotions was restricted to fairly uncommon negative conditions. Conclusions Our results suggest that there is no unequivocal relation between parenting behaviors and the probability to develop an anxiety or depressive disorder in adolescence, and that emotion recognition specialization may be a promising way to distinguish between various types of context-dependent reaction patterns. PMID:25642389
Marx, Ivo; Krause, John; Berger, Christoph; Häßler, Frank
2014-01-01
Objectives To effectively manage current task demands, attention must be focused on task-relevant information while task-irrelevant information is rejected. However, in everyday life, people must cope with emotions, which may interfere with actual task demands and may challenge functional attention allocation. Control of interfering emotions has been associated with the proper functioning of the dorsolateral prefrontal cortex (DLPFC). As DLPFC dysfunction is evident in subjects with ADHD and in subjects with alcohol dependence, the current study sought to examine the bottom-up effect of emotional distraction on task performance in both disorders. Methods Male adults with ADHD (n = 22), male adults with alcohol dependence (n = 16), and healthy controls (n = 30) performed an emotional working memory task (n-back task). In the background of the task, we presented neutral and negative stimuli that varied in emotional saliency. Results In both clinical groups, a working memory deficit was evident. Moreover, both clinical groups displayed deficient emotional interference control. The n-back performance of the controls was not affected by the emotional distractors, whereas that of subjects with ADHD deteriorated in the presence of low salient distractors, and that of alcoholics did not deteriorate until high salient distractors were presented. Subsequent to task performance, subjects with ADHD accurately recognized more distractors than did alcoholics and controls. In alcoholics, picture recognition accuracy was negatively associated with n-back performance, suggesting a functional association between the ability to suppress emotional distractors and successful task performance. In subjects with ADHD, performance accuracy was negatively associated with ADHD inattentive symptoms, suggesting that inattention contributes to the performance deficit. Conclusions Subjects with ADHD and alcoholics both display an emotional interference control deficit, which is especially pronounced in subjects with ADHD. Beyond dysfunctional attention allocation processes, a more general attention deficit seems to contribute to the more pronounced performance deficit pattern in ADHD. PMID:25265290
Emotion Recognition in Fathers and Mothers at High-Risk for Child Physical Abuse
ERIC Educational Resources Information Center
Asla, Nagore; de Paul, Joaquin; Perez-Albeniz, Alicia
2011-01-01
Objective: The present study was designed to determine whether parents at high risk for physical child abuse, in comparison with parents at low risk, show deficits in emotion recognition, as well as to examine the moderator effect of gender and stress on the relationship between risk for physical child abuse and emotion recognition. Methods: Based…
ERIC Educational Resources Information Center
Parker, Alison E.; Mathis, Erin T.; Kupersmidt, Janis B.
2013-01-01
Research Findings: The study examined children's recognition of emotion from faces and body poses, as well as gender differences in these recognition abilities. Preschool-aged children ("N" = 55) and their parents and teachers participated in the study. Preschool-aged children completed a web-based measure of emotion recognition skills…
ERIC Educational Resources Information Center
Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew W.; Clarke, Paula; Miles, Jeremy; Nation, Kate; Clarke, Leesa; Williams, Christine
2008-01-01
We compared young people with high-functioning autism spectrum disorders (ASDs) with age, sex and IQ matched controls on emotion recognition of faces and pictorial context. Each participant completed two tests of emotion recognition. The first used Ekman series faces. The second used facial expressions in visual context. A control task involved…
ERIC Educational Resources Information Center
Birmingham, Elina; Meixner, Tamara; Iarocci, Grace; Kanan, Christopher; Smilek, Daniel; Tanaka, James W.
2013-01-01
The strategies children employ to selectively attend to different parts of the face may reflect important developmental changes in facial emotion recognition. Using the Moving Window Technique (MWT), children aged 5-12 years and adults ("N" = 129) explored faces with a mouse-controlled window in an emotion recognition task. An…
Lodder, Gerine M A; Scholte, Ron H J; Goossens, Luc; Engels, Rutger C M E; Verhagen, Maaike
2016-02-01
Based on the belongingness regulation theory (Gardner et al., 2005, Pers. Soc. Psychol. Bull., 31, 1549), this study focuses on the relationship between loneliness and social monitoring. Specifically, we examined whether loneliness relates to performance on three emotion recognition tasks and whether lonely individuals show increased gazing towards their conversation partner's faces in a real-life conversation. Study 1 examined 170 college students (M age = 19.26; SD = 1.21) who completed an emotion recognition task with dynamic stimuli (morph task) and a micro(-emotion) expression recognition task. Study 2 examined 130 college students (M age = 19.33; SD = 2.00) who completed the Reading the Mind in the Eyes Test and who had a conversation with an unfamiliar peer while their gaze direction was videotaped. In both studies, loneliness was measured using the UCLA Loneliness Scale version 3 (Russell, 1996, J. Pers. Assess., 66, 20). The results showed that loneliness was unrelated to emotion recognition on all emotion recognition tasks, but that it was related to increased gaze towards their conversation partner's faces. Implications for the belongingness regulation system of lonely individuals are discussed. © 2015 The British Psychological Society.
Canli, Derya; Ozdemir, Hatice; Kocak, Orhan Murat
2015-08-01
Studies provide evidence for impaired social cognition in schizotypy and its association with negative symptoms. Cognitive features related to magical ideation - a component of the positive dimension of schizotypy - have been less investigated. We aimed to assess social cognitive functioning among adolescents with high magical ideation scores, mainly focusing on face and emotion recognition. 22 subjects with Magical Ideation Scale scores above the cut-off level and 22 controls with the lowest scores, drawn from among 250 students screened with this scale, were included in the study. A face and emotion recognition n-back test, the Empathy Quotient, theory of mind tests and the Physical Anhedonia Scale were administered to both the magical ideation and control groups. The magical ideation group performed significantly worse than controls on both face and emotion recognition tests. Emotion recognition performance was found to be affected by memory load, with sadness, among emotions, revealing a difference between the two groups. Empathy and theory of mind tests did not distinguish the magical ideation group from controls. Our findings provide evidence for a deficit in negative emotion recognition affected by memory load associated with magical ideation in adolescents. Copyright © 2015 Elsevier Inc. All rights reserved.
On the Time Course of Vocal Emotion Recognition
Pell, Marc D.; Kotz, Sonja A.
2011-01-01
How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically-anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses looked at how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing. PMID:22087275
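For readers who want the "identification point" made concrete, here is a minimal Python sketch under an assumed criterion: the first gate from which the listener's categorization is correct and stays correct through the last gate. Pell and Kotz's exact scoring rule may differ, so treat this only as an illustration.

    def identification_point(responses, target):
        """Return the 1-based index of the first gate from which the
        response equals `target` and remains `target` through all later
        gates, or None if the emotion is never stably identified."""
        point = None
        for i, resp in enumerate(responses, start=1):
            if resp == target:
                if point is None:
                    point = i          # candidate identification point
            else:
                point = None           # identification lost; reset
        return point

    # Example: stable identification of "fear" from gate 3 onward
    gates = ["anger", "anger", "fear", "fear", "fear", "fear", "fear"]
    print(identification_point(gates, "fear"))  # -> 3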
Facial emotion recognition is inversely correlated with tremor severity in essential tremor.
Auzou, Nicolas; Foubert-Samier, Alexandra; Dupouy, Sandrine; Meissner, Wassilios G
2014-04-01
We here assess limbic and orbitofrontal control in 20 patients with essential tremor (ET) and 18 age-matched healthy controls using the Ekman Facial Emotion Recognition Task and the IOWA Gambling Task. Our results show an inverse relation between facial emotion recognition and tremor severity. ET patients also showed worse performance in joy and fear recognition, as well as subtle abnormalities in risk detection, but these differences did not reach significance after correction for multiple testing.
Decoding facial expressions based on face-selective and motion-sensitive areas.
Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin
2017-06-01
Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
Stability of facial emotion recognition performance in bipolar disorder.
Martino, Diego J; Samamé, Cecilia; Strejilevich, Sergio A
2016-09-30
The aim of this study was to assess the performance in emotional processing over time in a sample of euthymic patients with bipolar disorder (BD). Performance in the facial recognition of the six basic emotions (surprise, anger, sadness, happiness, disgust, and fear) did not change during a follow-up period of almost 7 years. These preliminary results suggest that performance in facial emotion recognition might be stable over time in BD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
2015-05-28
[Extraction-garbled excerpt from a technical report on a neural network for real-time speech-emotion recognition. The surviving fragments note that speech-based emotion recognition is simpler and less computationally demanding than inputs such as facial expressions, and cite the Berlin Database of Emotional Speech and work by Scherer and colleagues on vocal expression of emotion.]
Emotional recognition from the speech signal for a virtual education agent
NASA Astrophysics Data System (ADS)
Tickle, A.; Raghu, S.; Elshaw, M.
2013-06-01
This paper explores the extraction of features from the speech wave to perform intelligent emotion recognition. A feature extraction tool (openSMILE) was used to obtain a baseline set of 998 acoustic features from a set of emotional speech recordings made with a microphone. The initial features were reduced to the most important ones so that emotion recognition could be performed using a supervised neural network. Given that the future use of virtual education agents lies in making them more interactive, developing agents with the capability to recognise and adapt to the emotional state of humans is an important step.
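As an illustration of this kind of pipeline, feature selection over a large acoustic feature vector followed by a small supervised network, here is a minimal scikit-learn sketch. The random matrix stands in for the 998 openSMILE features, and the selector, layer size, and four-class labels are assumptions rather than details from the paper.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 998))   # stand-in for 998 acoustic features per clip
    y = rng.integers(0, 4, size=400)  # stand-in emotion labels (four classes, assumed)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Keep only the most discriminative features, then classify with a small network
    model = make_pipeline(
        SelectKBest(f_classif, k=50),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    )
    model.fit(X_tr, y_tr)
    print("held-out accuracy:", model.score(X_te, y_te))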
Oerlemans, Anoek M; van der Meer, Jolanda M J; van Steijn, Daphne J; de Ruiter, Saskia W; de Bruijn, Yvette G E; de Sonneville, Leo M J; Buitelaar, Jan K; Rommelse, Nanda N J
2014-05-01
Autism is a highly heritable and clinically heterogeneous neuropsychiatric disorder that frequently co-occurs with other psychopathologies, such as attention-deficit/hyperactivity disorder (ADHD). An approach to parse heterogeneity is by forming more homogeneous subgroups of autism spectrum disorder (ASD) patients based on their underlying, heritable cognitive vulnerabilities (endophenotypes). Emotion recognition is a likely endophenotypic candidate for ASD and possibly for ADHD. Therefore, this study aimed to examine whether emotion recognition is a viable endophenotypic candidate for ASD and to assess the impact of comorbid ADHD in this context. A total of 90 children with ASD (43 with and 47 without ADHD), 79 ASD unaffected siblings, and 139 controls aged 6-13 years, were included to test recognition of facial emotion and affective prosody. Our results revealed that the recognition of both facial emotion and affective prosody was impaired in children with ASD and aggravated by the presence of ADHD. The latter could only be partly explained by typical ADHD cognitive deficits, such as inhibitory and attentional problems. The performance of unaffected siblings could overall be considered at an intermediate level, performing somewhat worse than the controls and better than the ASD probands. Our findings suggest that emotion recognition might be a viable endophenotype in ASD and a fruitful target in future family studies of the genetic contribution to ASD and comorbid ADHD. Furthermore, our results suggest that children with comorbid ASD and ADHD are at highest risk for emotion recognition problems.
Parents' Emotion-Related Beliefs, Behaviours, and Skills Predict Children's Recognition of Emotion
ERIC Educational Resources Information Center
Castro, Vanessa L.; Halberstadt, Amy G.; Lozada, Fantasy T.; Craig, Ashley B.
2015-01-01
Children who are able to recognize others' emotions are successful in a variety of socioemotional domains, yet we know little about how school-aged children's abilities develop, particularly in the family context. We hypothesized that children develop emotion recognition skill as a function of parents' own emotion-related beliefs,…
ERIC Educational Resources Information Center
Golan, Ofer; Ashwin, Emma; Granader, Yael; McClintock, Suzy; Day, Kate; Leggett, Victoria; Baron-Cohen, Simon
2010-01-01
This study evaluated "The Transporters", an animated series designed to enhance emotion comprehension in children with autism spectrum conditions (ASC). n = 20 children with ASC (aged 4-7) watched "The Transporters" everyday for 4 weeks. Participants were tested before and after intervention on emotional vocabulary and emotion recognition at three…
ERIC Educational Resources Information Center
Williams, Beth T.; Gray, Kylie M.; Tonge, Bruce J.
2012-01-01
Background: Children with autism have difficulties in emotion recognition and a number of interventions have been designed to target these problems. However, few emotion training interventions have been trialled with young children with autism and co-morbid ID. This study aimed to evaluate the efficacy of an emotion training programme for a group…
A voxel-based lesion study on facial emotion recognition after penetrating brain injury
Dal Monte, Olga; Solomon, Jeffrey M.; Schintu, Selene; Knutson, Kristine M.; Strenziok, Maren; Pardini, Matteo; Leopold, Anne; Raymont, Vanessa; Grafman, Jordan
2013-01-01
The ability to read emotions in the face of another person is an important social skill that can be impaired in subjects with traumatic brain injury (TBI). To determine the brain regions that modulate facial emotion recognition, we conducted a whole-brain analysis using a well-validated facial emotion recognition task and voxel-based lesion symptom mapping (VLSM) in a large sample of patients with focal penetrating TBIs (pTBIs). Our results revealed that individuals with pTBI performed significantly worse than normal controls in recognizing unpleasant emotions. VLSM results showed that impairment in facial emotion recognition was due to damage in a bilateral fronto-temporo-limbic network, including medial prefrontal cortex (PFC), anterior cingulate cortex, left insula and temporal areas. Besides those common areas, damage to the bilateral and anterior regions of PFC led to impairment in recognizing unpleasant emotions, whereas bilateral posterior PFC and left temporal areas led to impairment in recognizing pleasant emotions. Our findings add empirical evidence that the ability to read pleasant and unpleasant emotions in other people's faces is a complex process involving not only a common network that includes bilateral fronto-temporo-limbic lobes, but also other regions depending on emotional valence. PMID:22496440
Castagna, Filomena; Montemagni, Cristiana; Maria Milani, Anna; Rocca, Giuseppe; Rocca, Paola; Casacchia, Massimo; Bogetto, Filippo
2013-02-28
This study aimed to evaluate the ability to decode emotion in the auditory and audiovisual modality in a group of patients with schizophrenia, and to explore the role of cognition and psychopathology in affecting these emotion recognition abilities. Ninety-four outpatients in a stable phase and 51 healthy subjects were recruited. Patients were assessed through a psychiatric evaluation and a wide neuropsychological battery. All subjects completed the comprehensive affect testing system (CATS), a group of computerized tests designed to evaluate emotion perception abilities. With respect to the controls, patients were not impaired in the CATS tasks involving discrimination of nonemotional prosody, naming of emotional stimuli expressed by voice and judging the emotional content of a sentence, whereas they showed a specific impairment in decoding emotion in a conflicting auditory condition and in the multichannel modality. Prosody impairment was affected by executive functions, attention and negative symptoms, while deficit in multisensory emotion recognition was affected by executive functions and negative symptoms. These emotion recognition deficits, rather than being associated purely with emotion perception disturbances in schizophrenia, are affected by core symptoms of the illness. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long
2012-12-30
The Chinese Facial Emotion Recognition Database (CFERD), a computer-generated three-dimensional (3D) paradigm, was developed to measure the recognition of facial emotional expressions at different intensities. The stimuli consisted of 3D colour photographic images of six basic facial emotional expressions (happiness, sadness, disgust, fear, anger and surprise) and neutral faces of the Chinese. The purpose of the present study is to describe the development and validation of CFERD with nonclinical healthy participants (N=100; 50 men; age ranging between 18 and 50 years), and to generate a normative data set. The results are reported in terms of the sensitivity index d' [d' = Z(hit rate) - Z(false alarm rate), where Z(p), p ∈ [0,1], is the inverse of the cumulative Gaussian distribution].
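The d' computation spelled out in the brackets can be done directly with the inverse cumulative Gaussian; a minimal Python sketch, with made-up hit and false-alarm rates:

    from scipy.stats import norm

    def d_prime(hit_rate, fa_rate):
        """d' = Z(hit rate) - Z(false-alarm rate), where Z is the inverse
        of the cumulative Gaussian distribution (the normal quantile)."""
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    print(d_prime(0.85, 0.20))  # illustrative rates only; ~1.88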
Emotional content enhances true but not false memory for categorized stimuli.
Choi, Hae-Yoon; Kensinger, Elizabeth A; Rajaram, Suparna
2013-04-01
Past research has shown that emotion enhances true memory, but that emotion can either increase or decrease false memory. Two theoretical possibilities - the distinctiveness of emotional stimuli and the conceptual relatedness of emotional content - have been implicated as being responsible for influencing both true and false memory for emotional content. In the present study, we sought to identify the mechanisms that underlie these mixed findings by equating the thematic relatedness of the study materials across each type of valence used (negative, positive, or neutral). In three experiments, categorically bound stimuli (e.g., funeral, pets, and office items) were used for this purpose. When the encoding task required the processing of thematic relatedness, a significant true-memory enhancement for emotional content emerged in recognition memory, but no emotional boost to false memory (Experiment 1). This pattern persisted for true memory with a longer retention interval between study and test (24 h), and false recognition was reduced for emotional items (Experiment 2). Finally, better recognition memory for emotional items once again emerged when the encoding task (arousal ratings) required the processing of the emotional aspect of the study items, with no emotional boost to false recognition (Experiment 3). Together, these findings suggest that when emotional and neutral stimuli are equivalently high in thematic relatedness, emotion continues to improve true memory, but it does not override other types of grouping to increase false memory.
Colzato, Lorenza S; Sellaro, Roberta; Beste, Christian
2017-07-01
Charles Darwin proposed that via the vagus nerve, the tenth cranial nerve, emotional facial expressions are evolved, adaptive and serve a crucial communicative function. In line with this idea, the later-developed polyvagal theory assumes that the vagus nerve is the key phylogenetic substrate that regulates emotional and social behavior. The polyvagal theory assumes that optimal social interaction, which includes the recognition of emotion in faces, is modulated by the vagus nerve. So far, in humans, it has not yet been demonstrated that the vagus plays a causal role in emotion recognition. To investigate this we employed transcutaneous vagus nerve stimulation (tVNS), a novel non-invasive brain stimulation technique that modulates brain activity via bottom-up mechanisms. A sham/placebo-controlled, randomized cross-over within-subjects design was used to infer a causal relation between the stimulated vagus nerve and the related ability to recognize emotions as indexed by the Reading the Mind in the Eyes Test in 38 healthy young volunteers. Active tVNS, compared to sham stimulation, enhanced emotion recognition for easy items, suggesting that it promoted the ability to decode salient social cues. Our results confirm that the vagus nerve is causally involved in emotion recognition, supporting Darwin's argumentation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease
Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul
2016-01-01
According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393
Lu, Lingxi; Bao, Xiaohan; Chen, Jing; Qu, Tianshu; Wu, Xihong; Li, Liang
2018-05-01
Under a noisy "cocktail-party" listening condition with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of the target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a speech voice, by means of certain acoustical features. However, it was unclear whether emotionally conditioning a target-speech voice that has none of the typical acoustical features of emotions (i.e., an emotionally neutral voice) can be used by listeners for enhancing target-speech recognition under speech-on-speech masking conditions. In this study we examined the recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female screaming sound that has a marked negative emotional valence. The results showed that recognition of the target speech (especially the first keyword in a target sentence) was significantly improved by emotionally conditioning the target speaker's voice. Moreover, the emotional unmasking effect was independent of the unmasking effect of the perceived spatial separation between the target speech and the masker. Also, (skin conductance) electrodermal responses became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting an increase of listening efforts when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker's voice does not change the acoustical parameters of the target-speech stimuli, but the emotionally conditioned vocal features can be used as cues for unmasking target speech.
Emotional System for Military Target Identification
2009-10-01
[Extraction-garbled excerpt from a technical report on an emotional neural network for military target identification. The surviving fragments describe applying an emotional neural network to a facial recognition problem and note that applications in other areas, such as security (facial recognition) and medicine (blood cell identification), can also be used efficiently in military settings. Cited work includes Khashman (2009), Application of an emotional neural network to facial recognition, Neural Computing and Applications, 18(4), 309-320.]
Emotional facial recognition in proactive and reactive violent offenders.
Philipp-Wiegmann, Florence; Rösler, Michael; Retz-Junginger, Petra; Retz, Wolfgang
2017-10-01
The purpose of this study is to analyse individual differences in emotional facial recognition ability in violent offenders, who were characterised as either reactive or proactive in relation to their offending. In accordance with findings of our previous study, we expected higher impairments in facial recognition in reactive than proactive violent offenders. To assess the ability to recognize facial expressions, the computer-based Facial Emotional Expression Labeling Test (FEEL) was performed. Group allocation of reactive and proactive violent offenders and assessment of psychopathic traits were performed by an independent forensic expert using rating scales (PROREA, PCL-SV). Compared to proactive violent offenders and controls, the performance of emotion recognition in the reactive offender group was significantly lower, both in total and especially in recognition of negative emotions such as anxiety (d = -1.29), sadness (d = -1.54), and disgust (d = -1.11). Furthermore, reactive violent offenders showed a tendency to interpret non-anger emotions as anger. In contrast, proactive violent offenders performed as well as controls. General and specific deficits in reactive violent offenders are in line with the results of our previous study and correspond to predictions of the Integrated Emotion System (IES, 7) and the hostile attribution processes (21). Due to the different error pattern in the FEEL test, the theoretical distinction between proactive and reactive aggression can be supported based on emotion recognition, even though aggression itself is always a heterogeneous act rather than a distinct one-dimensional concept.
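The d values above are Cohen's d effect sizes. A minimal sketch of the standard pooled-SD formula, with purely hypothetical group statistics (the authors' exact variant is not stated):

    import math

    def cohens_d(m1, s1, n1, m2, s2, n2):
        """Cohen's d between two groups using a pooled standard deviation."""
        s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        return (m1 - m2) / s_pooled

    # Hypothetical means/SDs/ns purely for illustration
    print(cohens_d(12.0, 3.0, 30, 16.5, 3.2, 30))  # ~ -1.45: group 1 scores lower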
Early effects of duloxetine on emotion recognition in healthy volunteers
Bamford, Susan; Penton-Voak, Ian; Pinkney, Verity; Baldwin, David S; Munafò, Marcus R; Garner, Matthew
2015-01-01
The serotonin-noradrenaline reuptake inhibitor (SNRI) duloxetine is an effective treatment for major depression and generalised anxiety disorder. Neuropsychological models of antidepressant drug action suggest therapeutic effects might be mediated by the early correction of maladaptive biases in emotion processing, including the recognition of emotional expressions. Sub-chronic administration of duloxetine (for two weeks) produces adaptive changes in neural circuitry implicated in emotion processing; however, its effects on emotional expression recognition are unknown. Forty healthy participants were randomised to receive either 14 days of duloxetine (60 mg/day, titrated from 30 mg after three days) or matched placebo (with sham titration) in a double-blind, between-groups, repeated-measures design. On day 0 and day 14 participants completed a computerised emotional expression recognition task that measured sensitivity to the six primary emotions. Thirty-eight participants (19 per group) completed their course of tablets and were included in the analysis. Results provide evidence that duloxetine, compared to placebo, may reduce the accurate recognition of sadness. Drug effects were driven by changes in participants’ ability to correctly detect subtle expressions of sadness, with greater change observed in the placebo relative to the duloxetine group. These effects occurred in the absence of changes in mood. Our preliminary findings require replication, but complement recent evidence that sadness recognition is a therapeutic target in major depression, and a mechanism through which SNRIs could resolve negative biases in emotion processing to achieve therapeutic effects. PMID:25759400
Monetary incentives at retrieval promote recognition of involuntarily learned emotional information.
Yan, Chunping; Li, Yunyun; Zhang, Qin; Cui, Lixia
2018-03-07
Previous studies have suggested that the effects of reward on memory processes are affected by certain factors, but it remains unclear whether the effects of reward at retrieval on recognition processes are influenced by emotion. The event-related potential was used to investigate the combined effect of reward and emotion on memory retrieval and its neural mechanism. The behavioral results indicated that the reward at retrieval improved recognition performance under positive and negative emotional conditions. The event-related potential results indicated that there were significant interactions between the reward and emotion in the average amplitude during recognition, and the significant reward effects from the frontal to parietal brain areas appeared at 130-800 ms for positive pictures and at 190-800 ms for negative pictures, but there were no significant reward effects of neutral pictures; the reward effect of positive items appeared relatively earlier, starting at 130 ms, and that of negative pictures began at 190 ms. These results indicate that monetary incentives at retrieval promote recognition of involuntarily learned emotional information.
Emotion Recognition in Face and Body Motion in Bulimia Nervosa.
Dapelo, Marcela Marin; Surguladze, Simon; Morris, Robin; Tchanturia, Kate
2017-11-01
Social cognition has been studied extensively in anorexia nervosa (AN), but there are few studies in bulimia nervosa (BN). This study investigated the ability of people with BN to recognise emotions in ambiguous facial expressions and in body movement. Participants were 26 women with BN, who were compared with 35 with AN, and 42 healthy controls. Participants completed an emotion recognition task by using faces portraying blended emotions, along with a body emotion recognition task by using videos of point-light walkers. The results indicated that BN participants exhibited difficulties recognising disgust in less-ambiguous facial expressions, and a tendency to interpret non-angry faces as anger, compared with healthy controls. These difficulties were similar to those found in AN. There were no significant differences amongst the groups in body motion emotion recognition. The findings suggest that difficulties with disgust and anger recognition in facial expressions may be shared transdiagnostically in people with eating disorders. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.
Rupp, Claudia I; Derntl, Birgit; Osthaus, Friederike; Kemmler, Georg; Fleischhacker, W Wolfgang
2017-12-01
Despite growing evidence for neurobehavioral deficits in social cognition in alcohol use disorder (AUD), the clinical relevance remains unclear, and little is known about its impact on treatment outcome. This study prospectively investigated the impact of neurocognitive social abilities at treatment onset on treatment completion. Fifty-nine alcohol-dependent patients were assessed with measures of social cognition including 3 core components of empathy via paradigms measuring: (i) emotion recognition (the ability to recognize emotions via facial expression), (ii) emotional perspective taking, and (iii) affective responsiveness at the beginning of inpatient treatment for alcohol dependence. Subjective measures were also obtained, including estimates of task performance and a self-report measure of empathic abilities (Interpersonal Reactivity Index). According to treatment outcomes, patients were divided into a patient group with a regular treatment course (e.g., with planned discharge and without relapse during treatment) or an irregular treatment course (e.g., relapse and/or premature and unplanned termination of treatment, "dropout"). Compared with patients completing treatment in a regular fashion, patients with relapse and/or dropout of treatment had significantly poorer facial emotion recognition ability at treatment onset. Additional logistic regression analyses confirmed these results and identified poor emotion recognition performance as a significant predictor for relapse/dropout. Self-report (subjective) measures did not correspond with the neurobehavioral social cognition measures, that is, with objective task performance. Analyses of individual subtypes of facial emotions revealed poorer recognition particularly of disgust, anger, and neutral (no emotion) faces in patients with relapse/dropout. Social cognition in AUD is clinically relevant. Less successful treatment outcome was associated with poorer facial emotion recognition ability at the beginning of treatment. Impaired facial emotion recognition represents a neurocognitive risk factor that should be taken into account in alcohol dependence treatment. Treatments targeting the improvement of these social cognition deficits in AUD may offer a promising future approach. Copyright © 2017 by the Research Society on Alcoholism.
Stroud, J B; Freeman, T P; Leech, R; Hindocha, C; Lawn, W; Nutt, D J; Curran, H V; Carhart-Harris, R L
2018-02-01
Depressed patients robustly exhibit affective biases in emotional processing which are altered by SSRIs and predict clinical outcome. The objective of this study is to investigate whether psilocybin, recently shown to rapidly improve mood in treatment-resistant depression (TRD), alters patients' emotional processing biases. Seventeen patients with treatment-resistant depression completed a dynamic emotional face recognition task at baseline and 1 month later after two doses of psilocybin with psychological support. Sixteen controls completed the emotional recognition task over the same time frame but did not receive psilocybin. We found evidence for a group × time interaction on speed of emotion recognition (p = .035). At baseline, patients were slower at recognising facial emotions compared with controls (p < .001). After psilocybin, this difference was remediated (p = .208). Emotion recognition was faster at follow-up compared with baseline in patients (p = .004, d = .876) but not controls (p = .263, d = .302). In patients, this change was significantly correlated with a reduction in anhedonia over the same time period (r = .640, p = .010). Psilocybin with psychological support appears to improve processing of emotional faces in treatment-resistant depression, and this correlates with reduced anhedonia. Placebo-controlled studies are warranted to follow up these preliminary findings.
ERIC Educational Resources Information Center
Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki
2013-01-01
This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…
Surguladze, Simon A; Chkonia, Eka D; Kezeli, Archil R; Roinishvili, Maya O; Stahl, Daniel; David, Anthony S
2012-05-01
Abnormalities in visual processing have been found consistently in schizophrenia patients, including deficits in early visual processing, perceptual organization, and facial emotion recognition. There is however no consensus as to whether these abnormalities represent heritable illness traits and what their contribution is to psychopathology. Fifty patients with schizophrenia, 61 of their first-degree healthy relatives, and 50 psychiatrically healthy volunteers were tested with regard to facial affect (FA) discrimination and susceptibility to develop the color-contingent illusion [the McCollough Effect (ME)]. Both patients and relatives demonstrated significantly lower accuracy in FA discrimination compared with controls. There was also a significant effect of familiality: Participants from the same families had more similar accuracy scores than those who belonged to different families. Experiments with the ME showed that schizophrenia patients required longer time to develop the illusion than relatives and controls, which indicated poor visual adaptation in schizophrenia. Relatives were marginally slower than controls. There was no significant association between the measures of FA discrimination accuracy and ME in any of the participant groups. Facial emotion discrimination was associated with the degree of interpersonal problems, as measured by the Schizotypal Personality Questionnaire in relatives and healthy volunteers, whereas the ME was associated with the perceptual-cognitive symptoms of schizotypy and positive symptoms of schizophrenia. Our results support the heritability of FA discrimination deficits as a trait and indicate visual adaptation abnormalities in schizophrenia, which are symptom related.
Multimodal approaches for emotion recognition: a survey
NASA Astrophysics Data System (ADS)
Sebe, Nicu; Cohen, Ira; Gevers, Theo; Huang, Thomas S.
2004-12-01
Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing: emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and the recent advances in emotion recognition from facial, voice, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and we advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.
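The simplest member of the probabilistic-fusion family the survey advocates is a product rule that combines per-modality class posteriors under a conditional-independence assumption; a minimal Python sketch (the three emotion classes and the posterior values are invented for illustration):

    import numpy as np

    def fuse_posteriors(posteriors, prior):
        """Combine per-modality class posteriors P(class | modality_i),
        assuming modalities are conditionally independent given the class:
        P(class | all) is proportional to prior^(1-n) * prod_i P(class | modality_i)."""
        posteriors = np.asarray(posteriors, dtype=float)
        n = posteriors.shape[0]
        fused = np.prod(posteriors, axis=0) * np.asarray(prior, dtype=float) ** (1 - n)
        return fused / fused.sum()

    face  = [0.6, 0.3, 0.1]   # P(happy/neutral/angry | facial cues), invented
    voice = [0.5, 0.2, 0.3]   # P(happy/neutral/angry | vocal cues), invented
    print(fuse_posteriors([face, voice], prior=[1/3, 1/3, 1/3]))

Richer graphical models relax the independence assumption, for example by modeling correlations between modalities or temporal structure, which is why the survey advocates them over this naive product rule.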
Romero, Neri L
2017-06-01
A common social impairment in individuals with ASD is difficulty interpreting and/or predicting the emotions of others. To date, several interventions targeting the teaching of emotion recognition and understanding have been utilized by both researchers and practitioners. The results suggest that teaching emotion recognition is possible, but that the gains do not generalize to non-instructional contexts. This study sought to replicate earlier findings of a positive impact of teaching emotion recognition using a computer-based intervention and to extend them by testing for generalization to live models in the classroom setting. Two boys and one girl, four to eight years of age, educated in self-contained classrooms for students with communication and social skills deficits, participated in this study. A multiple probe across participants design was utilized. Measures of emotion recognition and understanding were assessed at baseline, intervention, and one month post-intervention to determine maintenance effects. Social validity was assessed through parent and teacher questionnaires. All participants showed improvements in measures assessing their recognition of emotions in faces, generalized knowledge to live models, and maintained gains one month post-intervention. These preliminary results are encouraging and should be used to inform a group design in order to test efficacy with a larger population. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bat-Pitault, F; Da Fonseca, D; Flori, S; Porcher-Guinet, V; Stagnara, C; Patural, H; Franco, P; Deruelle, C
2017-10-01
In depression, emotional processing is characterized by a negative bias, so it is legitimate to ask whether the same is true in very young at-risk children. Furthermore, sleep, also proposed as a marker of depression risk, is closely linked with emotions in adults and adolescents. We therefore aimed first to better describe the characteristics of emotion recognition in 3-year-olds and its links with sleep, and secondly to examine whether an emotion recognition pattern indicating a vulnerability to depression can already be found at this young age. We studied, in 133 children aged 36 months from the AuBE cohort, the number of correct answers on a facial emotion recognition task (joy, anger and sadness). Cognitive functions were also assessed with the WPPSI-III at 3 years of age, and sleep parameters (lights-off and lights-on times, sleep times, difficulty going to sleep, and number of awakenings per night reported by parents) were described by questionnaires filled out by mothers at 6, 12, 18, 24 and 36 months after birth. Of these 133 children, 21 whose mothers had at least one history of depression (13 boys) formed the high-risk group, and 19 children (8 boys) born to women with no history of depression formed the low-risk (control) group. Overall, at 36 months the 133 children recognized happiness significantly better than the other emotions (P=0.000), with global recognition higher in girls (M=8.8) than in boys (M=7.8) (P=0.013) and a positive correlation between global recognition ability and verbal IQ (P=0.000). Children who had less daytime sleep at 18 months and those who slept less at 24 months showed better recognition of sadness (P=0.043 and P=0.042); those with difficulties at bedtime at 18 months recognized happiness less well (P=0.043), and those who woke earlier at 24 months had better global recognition of emotions (P=0.015). Finally, boys in the high-risk group recognized sadness better than boys in the control group (P=0.015). This study confirms that emotion recognition is related to development, with a female advantage and a link with language skills at 36 months of life. More importantly, we found a relationship between sleep characteristics and emotion recognition ability, and a negative bias in emotion recognition in young males at risk for depression. Copyright © 2016 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
Biologically inspired emotion recognition from speech
NASA Astrophysics Data System (ADS)
Caponetti, Laura; Buscicchio, Cosimo Alessandro; Castellano, Giovanna
2011-12-01
Emotion recognition has become a fundamental task in human-computer interaction systems. In this article, we propose an emotion recognition approach based on biologically inspired methods. Specifically, emotion classification is performed using a long short-term memory (LSTM) recurrent neural network which is able to recognize long-range dependencies between successive temporal patterns. We propose to represent data using features derived from two different models: mel-frequency cepstral coefficients (MFCC) and the Lyon cochlear model. In the experimental phase, results obtained from the LSTM network and the two different feature sets are compared, showing that features derived from the Lyon cochlear model give better recognition results in comparison with those obtained with the traditional MFCC representation.
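A minimal PyTorch sketch of the architecture described: an LSTM that reads a sequence of per-frame MFCC vectors and classifies the utterance from the final hidden state. The layer sizes, six-class output, and random stand-in features are assumptions, not the authors' exact configuration.

    import torch
    import torch.nn as nn

    class SpeechEmotionLSTM(nn.Module):
        def __init__(self, n_mfcc=13, hidden=64, n_emotions=6):
            super().__init__()
            self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_emotions)

        def forward(self, x):              # x: (batch, frames, n_mfcc)
            _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden)
            return self.head(h_n[-1])      # logits: (batch, n_emotions)

    model = SpeechEmotionLSTM()
    mfcc_frames = torch.randn(8, 200, 13)  # 8 utterances, 200 frames, 13 MFCCs each
    logits = model(mfcc_frames)
    print(logits.shape)  # torch.Size([8, 6])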
Effect of Dopamine Therapy on Nonverbal Affect Burst Recognition in Parkinson's Disease
Péron, Julie; Grandjean, Didier; Drapier, Sophie; Vérin, Marc
2014-01-01
Background Parkinson's disease (PD) provides a model for investigating the involvement of the basal ganglia and mesolimbic dopaminergic system in the recognition of emotions from voices (i.e., emotional prosody). Although previous studies of emotional prosody recognition in PD have reported evidence of impairment, none of them compared PD patients at different stages of the disease, or ON and OFF dopamine replacement therapy, making it difficult to determine whether their impairment was due to general cognitive deterioration or to a more specific dopaminergic deficit. Methods We explored the involvement of the dopaminergic pathways in the recognition of nonverbal affect bursts (onomatopoeias) in 15 newly diagnosed PD patients in the early stages of the disease, 15 PD patients in the advanced stages of the disease and 15 healthy controls. The early PD group was studied in two conditions: ON and OFF dopaminergic therapy. Results Results showed that the early PD patients performed more poorly in the ON condition than in the OFF one, for overall emotion recognition, as well as for the recognition of anger, disgust and fear. Additionally, for anger, the early PD ON patients performed more poorly than controls. For overall emotion recognition, both advanced PD patients and early PD ON patients performed more poorly than controls. Analysis of continuous ratings on target and nontarget visual analog scales confirmed these patterns of results, showing a systematic emotional bias in both the advanced PD and early PD ON (but not OFF) patients compared with controls. Conclusions These results i) confirm the involvement of the dopaminergic pathways and basal ganglia in emotional prosody recognition, and ii) suggest a possibly deleterious effect of dopatherapy on affective abilities in the early stages of PD. PMID:24651759
Sellaro, Roberta; de Gelder, Beatrice; Finisguerra, Alessandra; Colzato, Lorenza S
2018-02-01
The polyvagal theory suggests that the vagus nerve is the key phylogenetic substrate enabling optimal social interactions, a crucial aspect of which is emotion recognition. A previous study showed that the vagus nerve plays a causal role in mediating people's ability to recognize emotions based on images of the eye region. The aim of this study is to verify whether the previously reported causal link between vagal activity and emotion recognition can be generalized to situations in which emotions must be inferred from images of whole faces and bodies. To this end, we employed transcutaneous vagus nerve stimulation (tVNS), a novel non-invasive brain stimulation technique that causes the vagus nerve to fire by the application of a mild electrical stimulation to the auricular branch of the vagus nerve, located in the anterior protuberance of the outer ear. In two separate sessions, participants received active or sham tVNS before and while performing two emotion recognition tasks, aimed at indexing their ability to recognize emotions from facial and bodily expressions. Active tVNS, compared to sham stimulation, enhanced emotion recognition for whole faces but not for bodies. Our results confirm and further extend recent observations supporting a causal relationship between vagus nerve activity and the ability to infer others' emotional state, but restrict this association to situations in which the emotional state is conveyed by the whole face and/or by salient facial cues, such as eyes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Making sense of self-conscious emotion: linking theory of mind and emotion in children with autism.
Heerey, Erin A; Keltner, Dacher; Capps, Lisa M
2003-12-01
Self-conscious emotions such as embarrassment and shame are associated with 2 aspects of theory of mind (ToM): (a) the ability to understand that behavior has social consequences in the eyes of others and (b) an understanding of social norm violations. The present study aimed to link ToM with the recognition of self-conscious emotion. Children with and without autism identified facial expressions of self-conscious and non-self-conscious emotions from photographs. ToM was also measured. Children with autism performed more poorly than comparison children at identifying self-conscious emotions, though they did not differ in the recognition of non-self-conscious emotions. When ToM ability was statistically controlled, group differences in the recognition of self-conscious emotion disappeared. Discussion focused on the links between ToM and self-conscious emotion.
Rapid communication: Global-local processing affects recognition of distractor emotional faces.
Srinivasan, Narayanan; Gupta, Rashmi
2011-03-01
Recent studies have shown links between happy faces and global, distributed attention as well as sad faces to local, focused attention. Emotions have been shown to affect global-local processing. Given that studies on emotion-cognition interactions have not explored the effect of perceptual processing at different spatial scales on processing stimuli with emotional content, the present study investigated the link between perceptual focus and emotional processing. The study investigated the effects of global-local processing on the recognition of distractor faces with emotional expressions. Participants performed a digit discrimination task with digits at either the global level or the local level presented against a distractor face (happy or sad) as background. The results showed that global processing associated with broad scope of attention facilitates recognition of happy faces, and local processing associated with narrow scope of attention facilitates recognition of sad faces. The novel results of the study provide conclusive evidence for emotion-cognition interactions by demonstrating the effect of perceptual processing on emotional faces. The results along with earlier complementary results on the effect of emotion on global-local processing support a reciprocal relationship between emotional processing and global-local processing. Distractor processing with emotional information also has implications for theories of selective attention.
Koelkebeck, Katja; Kohl, Waldemar; Luettgenau, Julia; Triantafillou, Susanna; Ohrmann, Patricia; Satoh, Shinji; Minoshita, Seiko
2015-07-30
A novel emotion recognition task that employs photos of a Japanese Noh mask, representing a highly ambiguous stimulus, was evaluated. As non-Asians perceive and/or label emotions differently from Asians, we aimed to identify patterns of task performance in non-Asian healthy volunteers with a view to future patient studies. The Noh mask test was presented to 42 adult German participants. Reaction times and emotion attribution patterns were recorded. To control for emotion identification abilities, a standard emotion recognition task was used, among others. Questionnaires assessed personality traits. Finally, results were compared to age- and gender-matched Japanese volunteers. Compared to the other tasks, German participants displayed the slowest reaction times on the Noh mask test, indicating the higher demands of ambiguous emotion recognition. They assigned more positive emotions to the mask than Japanese volunteers, demonstrating culture-dependent emotion identification patterns. As alexithymic and anxious traits were associated with slower reaction times, personality dimensions also had an impact on performance. We showed an advantage of ambiguous over conventional emotion recognition tasks. Moreover, we identified emotion identification patterns in Western individuals that were influenced by personality dimensions, suggesting performance differences in clinical samples. Due to its properties, the Noh mask test represents a promising tool in the differential diagnosis of psychiatric disorders, e.g. schizophrenia. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola
2014-01-01
Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643
Golan, Ofer; Baron-Cohen, Simon; Hill, Jacqueline
2006-02-01
Adults with Asperger Syndrome (AS) can recognise simple emotions and pass basic theory of mind tasks, but have difficulties recognising more complex emotions and mental states. This study describes a new battery of tasks testing recognition of 20 complex emotions and mental states from faces and voices. The battery was given to males and females with AS and matched controls. Results showed the AS group performed worse than controls overall on emotion recognition from faces and voices, and on 12 of the 20 specific emotions. Females recognised faces better than males regardless of diagnosis, and males with AS had more difficulties recognising emotions from faces than from voices. The implications of these results are discussed in relation to social functioning in AS.
Emotion-attention interactions in recognition memory for distractor faces.
Srinivasan, Narayanan; Gupta, Rashmi
2010-04-01
Effective filtering of distractor information has been shown to depend on perceptual load. Given the salience of emotional information and the existence of emotion-attention interactions, we explored recognition memory for emotional distractors as a function of focused versus distributed attention, by manipulating load and the spatial spread of attention. We performed two experiments measuring recognition memory performance for distractor neutral and emotional faces. Participants performed a color discrimination task (low-load) or letter identification task (high-load) with a letter-string display in Experiment 1, and a high-load letter identification task with letters presented in a circular array in Experiment 2. The stimuli were presented against a distractor face background. The recognition memory results show that happy faces were recognized better than sad faces under conditions of less focused or distributed attention, whereas when attention was more spatially focused, sad faces were recognized better than happy faces. The study provides evidence for emotion-attention interactions in which specific emotional information such as sad or happy is associated with focused or distributed attention, respectively. Distractor processing of emotional information also has implications for theories of attention. Copyright 2010 APA, all rights reserved.
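The recognition-memory comparison described here is naturally expressed in signal-detection terms; a brief illustrative sketch with hypothetical counts rather than the study's data:

```python
# Illustrative sketch with hypothetical counts (not the study's data):
# signal-detection sensitivity (d') for distractor-face recognition,
# split by face emotion and attention condition.
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """d' with a log-linear correction to avoid infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# (hits, misses, false alarms, correct rejections) per condition.
conditions = {
    ("happy", "distributed"): (18, 7, 4, 21),
    ("sad",   "distributed"): (12, 13, 4, 21),
    ("happy", "focused"):     (11, 14, 5, 20),
    ("sad",   "focused"):     (17, 8, 5, 20),
}
for (emotion, attention), counts in conditions.items():
    print(f"{emotion:5s} / {attention}: d' = {dprime(*counts):.2f}")
```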
Allgood, Rebecca; Heaton, Pamela
2015-09-01
Although the configurations of psychoacoustic cues signalling emotions in human vocalizations and instrumental music are very similar, cross-domain links in recognition performance have yet to be studied developmentally. Two hundred and twenty 5- to 10-year-old children were asked to identify musical excerpts and vocalizations as happy, sad, or fearful. The results revealed age-related increases in overall recognition performance with significant correlations across vocal and musical conditions at all developmental stages. Recognition scores were greater for musical than vocal stimuli and were superior in females compared with males. These results confirm that recognition of emotions in vocal and musical stimuli is linked by 5 years and that sensitivity to emotions in auditory stimuli is influenced by age and gender. © 2015 The British Psychological Society.
Recognition of face identity and emotion in expressive specific language impairment.
Merkenschlager, A; Amorosa, H; Kiefl, H; Martinius, J
2012-01-01
To study face and emotion recognition in children with mostly expressive specific language impairment (SLI-E), a test movie assessing perception and recognition of faces and mimic-gestural expression was administered to 24 children diagnosed with SLI-E and an age-matched control group of typically developing children. The children with SLI-E scored significantly worse than controls on both the face and the expression recognition tasks, with the larger effect on emotion recognition. The performance of the SLI-E group could not be explained by reduced attention during the test session. We conclude that SLI-E is associated with a deficit in decoding non-verbal emotional facial and gestural information, which might lead to profound and persistent problems in social interaction and development. Copyright © 2012 S. Karger AG, Basel.
[Recognition of facial expression of emotions in Parkinson's disease: a theoretical review].
Alonso-Recio, L; Serrano-Rodriguez, J M; Carvajal-Molina, F; Loeches-Alonso, A; Martin-Plasencia, P
2012-04-16
Emotional facial expression is a basic guide during social interaction; alterations in its expression or recognition therefore impose important limitations on communication. Our aim was to examine facial expression recognition abilities and their possible impairment in Parkinson's disease. First, we review the studies on this topic, which have not yielded entirely consistent results. Second, we analyse the factors that may explain these discrepancies and, third, we consider the relationship between emotion recognition problems and the cognitive impairment associated with the disease. Finally, we propose alternative strategies for designing studies that could clarify the status of these abilities in Parkinson's disease. Most studies suggest deficits in facial expression recognition, especially for expressions with negative emotional content. However, these alterations may be related to impairments that also appear over the course of the disease in other perceptual and executive processes. To make progress on this issue, we consider it necessary to design emotion recognition studies that differentially engage executive or visuospatial processes, and/or that contrast cognitive abilities using facial expressions and non-emotional stimuli. Clarifying the status of these abilities, besides increasing our knowledge of the functional consequences of the brain damage characteristic of the disease, may indicate whether they deserve special attention within rehabilitation programmes.
Intact anger recognition in depression despite aberrant visual facial information usage.
Clark, Cameron M; Chiu, Carina G; Diaz, Ruth L; Goghari, Vina M
2014-08-01
Previous literature has indicated abnormalities in facial emotion recognition abilities, as well as deficits in basic visual processes, in major depression. However, the literature is unclear on a number of important factors, including whether these abnormalities represent deficient or enhanced emotion recognition abilities compared to control populations, and the degree to which basic visual deficits might impact this process. The present study investigated emotion recognition abilities for angry versus neutral facial expressions in a sample of undergraduate students with Beck Depression Inventory-II (BDI-II) scores indicative of moderate depression (i.e., ≥20), compared to matched low-BDI-II score (i.e., ≤2) controls, via the Bubbles Facial Emotion Perception Task. Results indicated unimpaired behavioural performance in discriminating angry from neutral expressions in the high depressive symptoms group relative to the minimal depressive symptoms group, despite evidence of an abnormal pattern of visual facial information usage. The generalizability of the current findings is limited by the highly structured nature of the facial emotion recognition task used, as well as by the use of an analog sample of undergraduates scoring high in self-rated symptoms of depression rather than a clinical sample. Our findings suggest that basic visual processes are involved in emotion recognition abnormalities in depression, consistent with the emotion recognition literature in other psychopathologies (e.g., schizophrenia, autism, social anxiety). Future research should seek to replicate these findings in clinical populations with major depression, and assess the association between aberrant face gaze behaviours and symptom severity and social functioning. Copyright © 2014 Elsevier B.V. All rights reserved.
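The Bubbles technique referred to above infers which facial regions drive accuracy; a rough sketch of that logic with simulated masks (array shapes and variable names are assumptions, not the task's actual code):

```python
# Rough sketch of the logic behind a Bubbles analysis (simulated masks;
# shapes and names are assumptions, not the task's actual code).
import numpy as np

rng = np.random.default_rng(0)
n_trials, height, width = 500, 64, 64

masks = rng.random((n_trials, height, width))         # per-trial bubble masks (0..1)
correct = rng.integers(0, 2, n_trials).astype(bool)   # discrimination accuracy

# Classification image: pixels more visible on correct than incorrect
# trials carried diagnostic information for the observer.
ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
print("most diagnostic pixel:", np.unravel_index(ci.argmax(), ci.shape))
```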
Li, Shijia; Weerda, Riklef; Milde, Christopher; Wolf, Oliver T; Thiel, Christiane M
2014-12-01
Previous studies have shown that acute psychosocial stress impairs declarative memory retrieval and that emotional material is especially sensitive to this effect. Animal studies suggest a central role of the amygdala, which modulates memory processes in the hippocampus, prefrontal cortex and other brain areas. We used functional magnetic resonance imaging (fMRI) to investigate neural correlates of stress-induced modulation of emotional recognition memory in humans. Twenty-seven healthy, right-handed, non-smoking male volunteers performed an emotional face recognition task. During encoding, participants were presented with 50 fearful and 50 neutral faces. One hour later, they underwent either a stress (Trier Social Stress Test) or a control procedure outside the scanner, followed immediately by the recognition session inside the scanner, where participants had to discriminate between 100 old and 50 new faces. Stress increased salivary cortisol, blood pressure and pulse, and worsened participants' mood, but did not affect recognition memory. BOLD data during recognition revealed a stress condition by emotion interaction in the left inferior frontal gyrus and right hippocampus, due to a stress-induced increase of neural activity to fearful faces and a decrease to neutral faces. Functional connectivity analyses revealed a stress-induced increase in coupling between the right amygdala and the right fusiform gyrus when processing fearful as compared to neutral faces. Our results provide evidence that acute psychosocial stress affects medial temporal and frontal brain areas differentially for neutral and emotional items, with stress-induced privileged processing of emotional stimuli.
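The reported condition-by-emotion interaction amounts to comparing fearful-minus-neutral activation differences across groups; a minimal sketch with hypothetical ROI betas (the per-group split is assumed, not taken from the study):

```python
# Minimal sketch with hypothetical ROI betas (not the study's data): the
# stress-condition x emotion interaction is the fearful-minus-neutral
# difference compared between the stress and control groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 27  # sample size borrowed from the abstract; per-group split is assumed

# Columns: (fearful, neutral) beta estimates per subject, one array per group.
stress = rng.normal(loc=[0.6, 0.1], scale=0.3, size=(n, 2))
control = rng.normal(loc=[0.3, 0.4], scale=0.3, size=(n, 2))

diff_stress = stress[:, 0] - stress[:, 1]
diff_control = control[:, 0] - control[:, 1]
t, p = stats.ttest_ind(diff_stress, diff_control)
print(f"condition x emotion interaction: t = {t:.2f}, p = {p:.4f}")
```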
Enrici, Ivan; Adenzato, Mauro; Ardito, Rita B.; Mitkova, Antonia; Cavallo, Marco; Zibetti, Maurizio; Lopiano, Leonardo; Castelli, Lorys
2015-01-01
Background: Parkinson's disease (PD) is characterised by well-known motor symptoms, whereas the presence of cognitive non-motor symptoms, such as emotional disturbances, is still underestimated. One of the major problems in studying emotion deficits in PD is an atomising approach that does not take different levels of emotion elaboration into account. Our study addressed the question of whether people with PD exhibit difficulties in one or more specific dimensions of emotion processing, investigating three levels of analysis: recognition, representation, and regulation. Methodology: Thirty-two consecutive medicated patients with PD and 25 healthy controls were enrolled in the study. Participants completed a three-level assessment of emotional processing using quantitative standardised tasks: the Ekman 60-Faces for emotion recognition, the full 36-item version of the Reading the Mind in the Eyes (RME) for emotion representation, and the 20-item Toronto Alexithymia Scale (TAS-20) for emotion regulation. Principal Findings: For emotion recognition, patients obtained significantly worse scores than controls on the Ekman 60-Faces total score but not on any individual basic emotion. For emotion representation, patients obtained significantly worse scores than controls on the RME experimental score but not on the RME gender control task. Finally, for emotion regulation, patients and controls did not differ on the TAS-20, and no specific differences were found on the TAS-20 subscales. The PD impairments in emotion recognition and representation did not correlate with dopamine therapy, disease severity, or duration of illness. These results were independent of other cognitive processes, such as global cognitive status and executive function, and of psychiatric status, such as depression, anxiety or apathy. Conclusions: These results may contribute to a better understanding of the emotional problems often seen in patients with PD and of the measures used to test these problems, in particular the use of different versions of the RME task. PMID:26110271
Cebula, Katie R.; Wishart, Jennifer G.; Willis, Diane S.; Pitcairn, Tom K.
2017-01-01
Some children with Down syndrome may experience difficulties in recognizing facial emotions, particularly fear, but it is not clear why, nor how such skills can best be facilitated. Using a photo-matching task, emotion recognition was tested in children with Down syndrome, children with nonspecific intellectual disability and cognitively matched,…
Sawyer, Alyssa C P; Williamson, Paul; Young, Robyn L
2012-04-01
Research has shown that individuals with Autism Spectrum Disorders (ASD) have difficulties recognising emotions from facial expressions. Since eye contact is important for accurate emotion recognition, and individuals with ASD tend to avoid eye contact, this tendency for gaze aversion has been proposed as an explanation for the emotion recognition deficit. This explanation was investigated using a newly developed emotion and mental state recognition task. Individuals with Asperger's Syndrome were less accurate at recognising emotions and mental states, but did not show evidence of gaze avoidance compared to individuals without Asperger's Syndrome. This suggests that the way individuals with Asperger's Syndrome look at faces cannot account for the difficulty they have recognising expressions.
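Gaze avoidance of the kind tested here is typically quantified as dwell time on an eye-region area of interest; an illustrative sketch with assumed fixation records (coordinates and the helper are hypothetical):

```python
# Illustrative sketch (assumed fixation records): gaze avoidance quantified
# as the share of dwell time falling on the eye region of a face.
def eye_dwell_proportion(fixations, eye_box):
    """fixations: iterable of (x, y, duration_ms); eye_box: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = eye_box
    total = sum(d for _, _, d in fixations)
    on_eyes = sum(d for x, y, d in fixations
                  if x0 <= x <= x1 and y0 <= y <= y1)
    return on_eyes / total if total else 0.0

# Three fixations; the eye region spans the upper part of the image.
fixations = [(120, 80, 300), (150, 90, 250), (140, 210, 400)]
print(eye_dwell_proportion(fixations, eye_box=(80, 50, 200, 120)))  # ~0.58
```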
The effect of unimodal affective priming on dichotic emotion recognition.
Voyer, Daniel; Myles, Daniel
2017-11-15
The present report concerns two experiments extending to unimodal priming the cross-modal priming effects observed with auditory emotions by Harding and Voyer [(2016). Laterality effects in cross-modal affective priming. Laterality: Asymmetries of Body, Brain and Cognition, 21, 585-605]. Experiment 1 used binaural targets to establish the presence of the priming effect, and Experiment 2 used dichotically presented targets to examine auditory asymmetries. In Experiment 1, 82 university students completed a task in which binaural targets, consisting of one of 4 English words inflected in one of 4 emotional tones, were preceded by binaural primes consisting of one of 4 Mandarin words pronounced in the same (congruent) or a different (incongruent) emotional tone. Trials in which the prime emotion was congruent with the target emotion showed faster responses and higher accuracy in identifying the target emotion. In Experiment 2, 60 undergraduate students participated, and the target was presented dichotically instead of binaurally. Primes congruent with the left-ear target produced a large left-ear advantage, whereas primes congruent with the right-ear target produced a right-ear advantage. These results indicate that unimodal priming produces stronger effects than those observed under cross-modal priming. The findings suggest that priming should be considered a strong top-down influence on laterality effects.
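The Experiment 1 congruency effect can be summarized as the reaction-time cost of incongruent primes; a minimal sketch over invented trials:

```python
# Minimal sketch over invented trials: the Experiment 1 priming effect as
# the reaction-time cost of emotionally incongruent primes.
import statistics

# (prime_emotion, target_emotion, rt_ms), correct trials only.
trials = [("angry", "angry", 640), ("sad", "sad", 655), ("happy", "happy", 650),
          ("happy", "sad", 730), ("angry", "happy", 745), ("sad", "angry", 738)]

congruent = [rt for prime, target, rt in trials if prime == target]
incongruent = [rt for prime, target, rt in trials if prime != target]

# Positive values indicate a congruency benefit, as reported above.
priming_ms = statistics.mean(incongruent) - statistics.mean(congruent)
print(f"priming effect: {priming_ms:.0f} ms")
```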
Demirci, Esra; Erdogan, Ayten
2016-12-01
The objectives of this study were to evaluate both face and emotion recognition, to detect differences among attention deficit and hyperactivity disorder (ADHD) subgroups, to identify effects of gender, and to assess the effects of methylphenidate and atomoxetine treatment on both face and emotion recognition in patients with ADHD. The study sample consisted of 41 male and 29 female patients, 8-15 years of age, who were diagnosed as having combined type ADHD (N = 26), hyperactive/impulsive type ADHD (N = 21) or inattentive type ADHD (N = 23) but had not previously used any medication for ADHD, and 35 male and 25 female healthy individuals. Long-acting methylphenidate (OROS-MPH) was prescribed to 38 patients, whereas atomoxetine was prescribed to 32 patients. The Reading the Mind in the Eyes Test (RMET) and the Benton Face Recognition Test (BFRT) were administered to all participants before and after treatment. The patients with ADHD had significantly fewer correct answers on the child and adolescent RMET and on the BFRT than the healthy controls. Among the ADHD subtypes, the hyperactive/impulsive subtype had fewer correct answers on the RMET than the inattentive subtype, and fewer correct answers on the short and long forms of the BFRT than the combined and inattentive subtypes. Male and female patients with ADHD did not differ significantly in the number of correct answers on the RMET and BFRT. The patients showed significant improvement on the RMET and BFRT after treatment with OROS-MPH or atomoxetine. Patients with ADHD have difficulties in face recognition as well as emotion recognition. Both OROS-MPH and atomoxetine affect emotion recognition. However, further studies on face and emotion recognition in ADHD are needed.
Postsurgical Disfigurement Influences Disgust Recognition: A Case-Control Study.
Lisan, Quentin; George, Nathalie; Hans, Stephane; Laccourreye, Ollivier; Lemogne, Cédric
Little is known about how emotion recognition may be modified in individuals prone to elicit disgust. We sought to determine whether subjects with total laryngectomy would show modified recognition of facial expressions of disgust. A total of 29 patients with a history of advanced-stage laryngeal cancer were recruited, 17 treated surgically (total laryngectomy) and 12 treated with chemoradiation therapy only. Based on a validated set of images of facial expressions of fear, disgust, surprise, happiness, sadness and anger displayed by 6 actors, we presented participants with expressions of each emotion at 5 levels of increasing intensity and measured their ability to recognize these emotions. Participants with (vs without) laryngectomy showed a higher threshold for the recognition of disgust (3.2 vs 2.7 images needed before emotion recognition, p = 0.03) and a lower rate of correct recognition (75.5% vs 88.9%, p = 0.03). Subjects presenting with an aesthetic impairment of the head and neck showed poorer performance in disgust recognition than those without disfigurement. These findings might reflect perceptual adaptation, habituation, or higher-level processes related to emotion regulation strategies. Copyright © 2018 Academy of Consultation-Liaison Psychiatry. Published by Elsevier Inc. All rights reserved.
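The threshold comparison reported above (number of intensity steps before disgust is recognized) could be tested as sketched below; the thresholds are hypothetical, and only the group sizes come from the abstract:

```python
# Sketch of the threshold comparison (thresholds are hypothetical; only the
# group sizes, 17 vs 12, come from the abstract): number of intensity steps
# needed before disgust is recognized, compared across treatment groups.
from scipy.stats import mannwhitneyu

laryngectomy = [3, 4, 3, 3, 4, 2, 3, 4, 3, 3, 4, 3, 3, 2, 4, 3, 3]  # n = 17
chemoradiation = [3, 2, 3, 2, 3, 3, 2, 3, 2, 3, 3, 2]               # n = 12

# One-sided test: laryngectomy patients expected to need more steps.
u, p = mannwhitneyu(laryngectomy, chemoradiation, alternative="greater")
print(f"U = {u}, p = {p:.3f}")
```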
Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M
2017-05-01
This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).