Textual emotion recognition for enhancing enterprise computing
NASA Astrophysics Data System (ADS)
Quan, Changqin; Ren, Fuji
2016-05-01
The growing interest in affective computing (AC) raises many valuable research topics that can meet different application demands in enterprise systems. The present study explores a sub-area of AC techniques - textual emotion recognition for enhancing enterprise computing. Multi-label emotion recognition in text can provide a more comprehensive understanding of emotions than single-label emotion recognition. A representation of 'emotion state in text' is proposed to encompass the multidimensional emotions in text. It ensures a formal description of the configurations of basic emotions as well as of the relations between them. Our method allows recognition of emotions for words that bear indirect emotions, emotion ambiguity, and multiple emotions. We further investigate the effect of word order on emotional expression by comparing the performance of a bag-of-words model and a sequence model for multi-label sentence emotion recognition. The experiments show that classification results under the sequence model are better than under the bag-of-words model, and a homogeneous Markov model showed promising results for multi-label sentence emotion recognition. This emotion recognition system provides a convenient way to acquire valuable emotion information and to improve enterprise competitiveness in many respects.
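The bag-of-words versus sequence contrast can be illustrated with a small scikit-learn sketch; this is not the authors' system: the corpus and labels are invented, and a bigram representation is a crude stand-in for the homogeneous Markov model.

```python
# Illustrative sketch, not the authors' system: contrasting a pure
# bag-of-words representation with one that retains local word order
# (bigrams) for multi-label emotion classification. The corpus, labels
# and the bigram stand-in for the Markov model are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

sentences = ["i can hardly wait to see you", "this is so unfair and sad",
             "what a surprise, thank you", "i am not angry, just tired"]
labels = [{"joy"}, {"anger", "sadness"}, {"surprise", "joy"}, {"sadness"}]

Y = MultiLabelBinarizer().fit_transform(labels)   # one column per emotion

for ngrams in [(1, 1), (1, 2)]:  # (1, 1) = bag of words; (1, 2) adds order
    X = CountVectorizer(ngram_range=ngrams).fit_transform(sentences)
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
    print(ngrams, clf.predict(X))
```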
Improvement of emotional healthcare system with stress detection from ECG signal.
Tivatansakul, S; Ohkura, M
2015-01-01
Our emotional healthcare system is designed to cope with users' negative emotions in daily life. To make the system more intelligent, we integrated emotion recognition by facial expression to provide appropriate services based on the user's current emotional state. However, our facial-expression-based emotion recognition confuses some positive, neutral, and negative emotions, which can make the emotional healthcare system provide a relaxation service even though users don't have negative emotions. Therefore, to increase the effectiveness of the system in providing the relaxation service, we integrate stress detection from the ECG signal. Stress detection may address the confusion issue of facial-expression-based emotion recognition in providing the service. Indeed, our results show that integrating stress detection increases the effectiveness and efficiency of the emotional healthcare system in providing services.
Speech emotion recognition methods: A literature review
NASA Astrophysics Data System (ADS)
Basharirad, Babak; Moradhaseli, Mohammadreza
2017-10-01
Recently, research attention to emotional speech signals in human-machine interfaces has grown due to the availability of high computation capability. Many systems have been proposed in the literature to identify the emotional state through speech. Selecting suitable feature sets, designing proper classification methods, and preparing an appropriate dataset are the main key issues of speech emotion recognition systems. This paper critically analyzes the currently available approaches to speech emotion recognition based on three evaluation parameters (feature set, classification of features, and accuracy). In addition, this paper evaluates the performance and limitations of available methods. Furthermore, it highlights promising directions for the improvement of speech emotion recognition systems.
An audiovisual emotion recognition system
NASA Astrophysics Data System (ADS)
Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun
2007-12-01
Human emotions can be expressed through many bio-symbols. Speech and facial expression are two of them. Both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system was developed and is presented in this paper. The system is designed for real-time practice and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt classifier performance, and rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features were selected to represent emotional information, and 52 audiovisual features were selected, owing to synchronization, when speech and video were fused together. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also suggest that multi-module fused recognition will become the trend of emotion recognition in the future.
Kumfor, Fiona; Irish, Muireann; Hodges, John R.; Piguet, Olivier
2013-01-01
Patients with frontotemporal dementia have pervasive changes in emotion recognition and social cognition, yet the neural changes underlying these emotion processing deficits remain unclear. The multimodal system model of emotion proposes that basic emotions are dependent on distinct brain regions, which undergo significant pathological changes in frontotemporal dementia. As such, this syndrome may provide important insight into the impact of neural network degeneration upon the innate ability to recognise emotions. This study used voxel-based morphometry to identify discrete neural correlates involved in the recognition of basic emotions (anger, disgust, fear, sadness, surprise and happiness) in frontotemporal dementia. Forty frontotemporal dementia patients (18 behavioural-variant, 11 semantic dementia, 11 progressive nonfluent aphasia) and 27 healthy controls were tested on two facial emotion recognition tasks: The Ekman 60 and Ekman Caricatures. Although each frontotemporal dementia group showed impaired recognition of negative emotions, distinct associations between emotion-specific task performance and changes in grey matter intensity emerged. Fear recognition was associated with the right amygdala; disgust recognition with the left insula; anger recognition with the left middle and superior temporal gyrus; and sadness recognition with the left subcallosal cingulate, indicating that discrete neural substrates are necessary for emotion recognition in frontotemporal dementia. The erosion of emotion-specific neural networks in neurodegenerative disorders may produce distinct profiles of performance that are relevant to understanding the neurobiological basis of emotion processing. PMID:23805313
NASA Astrophysics Data System (ADS)
Poock, G. K.; Martin, B. J.
1984-02-01
This was an applied investigation examining the ability of a speech recognition system to recognize speakers' inputs when the speakers were under different stress levels. Subjects were asked to speak to a voice recognition system under three conditions: (1) normal office environment, (2) emotional stress, and (3) perceptual-motor stress. Results indicate a definite relationship between voice recognition system performance and the type of low stress reference patterns used to achieve recognition.
Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo
2016-01-01
Altered emotional processing, including reduced emotion facial expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques. It is not known whether altered facial expression and recognition in PD are related. To investigate possible deficits in facial emotion expression and emotion recognition and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Patients also yielded a worse Ekman global score and worse disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.001). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results of this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.
Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality
Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque; Javaid, Ahmad Y
2018-02-01
Extensive possibilities of applications have made emotion recognition ineluctable and challenging in the field of computer science. The use of non-verbal cues such as gestures, body movement, and facial expressions convey the feeling and the feedback to the user. This discipline of Human-Computer Interaction places reliance on the algorithmic robustness and the sensitivity of the sensor to ameliorate the recognition. Sensors play a significant role in accurate detection by providing a very high-quality input, hence increasing the efficiency and the reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence in the machines. This paper presents a brief study of the various approaches and the techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting the emotions by facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam. PMID:29389845
On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information
NASA Astrophysics Data System (ADS)
Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.
Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
Herba, Catherine; Phillips, Mary
2004-10-01
Intact emotion processing is critical for normal emotional development. Recent advances in neuroimaging have facilitated the examination of brain development, and have allowed for the exploration of the relationships between the development of emotion processing abilities, and that of associated neural systems. A literature review was performed of published studies examining the development of emotion expression recognition in normal children and psychiatric populations, and of the development of neural systems important for emotion processing. Few studies have explored the development of emotion expression recognition throughout childhood and adolescence. Behavioural studies suggest continued development throughout childhood and adolescence (reflected by accuracy scores and speed of processing), which varies according to the category of emotion displayed. Factors such as sex, socio-economic status, and verbal ability may also affect this development. Functional neuroimaging studies in adults highlight the role of the amygdala in emotion processing. Results of the few neuroimaging studies in children have focused on the role of the amygdala in the recognition of fearful expressions. Although results are inconsistent, they provide evidence throughout childhood and adolescence for the continued development of and sex differences in amygdalar function in response to fearful expressions. Studies exploring emotion expression recognition in psychiatric populations of children and adolescents suggest deficits that are specific to the type of disorder and to the emotion displayed. Results from behavioural and neuroimaging studies indicate continued development of emotion expression recognition and neural regions important for this process throughout childhood and adolescence. Methodological inconsistencies and disparate findings make any conclusion difficult, however. Further studies are required examining the relationship between the development of emotion expression recognition and that of underlying neural systems, in particular subcortical and prefrontal cortical structures. These will inform understanding of the neural bases of normal and abnormal emotional development, and aid the development of earlier interventions for children and adolescents with psychiatric disorders.
Facial recognition in education system
NASA Astrophysics Data System (ADS)
Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish
2017-11-01
Human beings use emotions extensively to convey messages and their resolution. Emotion detection and face recognition can provide an interface between individuals and technologies. One of the most successful applications of recognition analysis is the recognition of faces. Many different techniques have been used to recognize facial expressions, and emotion detection must handle varying poses. In this paper, we propose an efficient method to recognize facial expressions by tracking face points and distances. It can automatically identify observed face movements and facial expression in an image, capturing different aspects of emotion and facial expressions.
More Pronounced Deficits in Facial Emotion Recognition for Schizophrenia than Bipolar Disorder
Goghari, Vina M; Sponheim, Scott R
2012-01-01
Schizophrenia and bipolar disorder are typically separated in diagnostic systems. Behavioural, cognitive, and brain abnormalities associated with each disorder nonetheless overlap. We evaluated the diagnostic specificity of facial emotion recognition deficits in schizophrenia and bipolar disorder to determine whether select aspects of emotion recognition differed for the two disorders. The investigation used an experimental task that included the same facial images in an emotion recognition condition and an age recognition condition (to control for processes associated with general face recognition) in 27 schizophrenia patients, 16 bipolar I patients, and 30 controls. Schizophrenia and bipolar patients exhibited both shared and distinct aspects of facial emotion recognition deficits. Schizophrenia patients had deficits in recognizing angry facial expressions compared to healthy controls and bipolar patients. Compared to control participants, both schizophrenia and bipolar patients were more likely to mislabel facial expressions of anger as fear. Given that schizophrenia patients exhibited a deficit in emotion recognition for angry faces, which did not appear due to generalized perceptual and cognitive dysfunction, improving recognition of threat-related expression may be an important intervention target to improve social functioning in schizophrenia. PMID:23218816
Caballero-Morales, Santiago-Omar
2013-01-01
An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists of acoustic phoneme modelling of emotion-specific vowels. For this, a standard phoneme-based Automatic Speech Recognition (ASR) system was built with Hidden Markov Models (HMMs), where different phoneme HMMs were built for the consonants and the emotion-specific vowels associated with four emotional states (anger, happiness, neutral, sadness). The emotional state of a spoken sentence is then estimated by counting the number of emotion-specific vowels found in the ASR's output for the sentence. With this approach, accuracy of 87–100% was achieved for the recognition of the emotional state of Mexican Spanish speech. PMID:23935410
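The counting-and-argmax estimation step lends itself to a minimal sketch. This assumes a hypothetical ASR output format where each recognized vowel carries an emotion tag; the paper's actual symbol set may differ.

```python
# Minimal sketch of the estimation step described above, assuming a
# hypothetical ASR output in which vowels carry an emotion tag such as
# "a_anger" or "e_happiness"; the paper's actual symbol set may differ.
from collections import Counter

EMOTIONS = ("anger", "happiness", "neutral", "sadness")

def estimate_emotion(asr_phonemes):
    """Count emotion-tagged vowels and return the most frequent emotion."""
    counts = Counter()
    for ph in asr_phonemes:
        for emo in EMOTIONS:
            if ph.endswith("_" + emo):
                counts[emo] += 1
    return counts.most_common(1)[0][0] if counts else "neutral"

print(estimate_emotion(["k", "a_anger", "s", "a_anger", "e_neutral"]))
```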
NASA Astrophysics Data System (ADS)
Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.
2018-03-01
This paper proposes an automatic facial emotion recognition algorithm comprising two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. First, in the training stage, the system classifies all training expressions into 6 different classes, one for each of the 6 emotions considered. In the recognition phase, it applies the Gabor bank to a face image, finds the fiducial points, and feeds the resulting features to the trained neural architecture to recognize the emotion.
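A sketch of the Gabor feature-extraction stage follows, using OpenCV. Fiducial detection, the 14 FAPs, and the neural classifier are out of scope; the point list, kernel size, and filter-bank parameters are assumptions.

```python
# Illustrative sketch of the feature-extraction stage: Gabor magnitudes
# sampled at given fiducial points with OpenCV. Fiducial detection, the
# FAPs and the classifier are not reproduced; bank parameters are assumed.
import cv2
import numpy as np

def gabor_features(gray, points, n_thetas=4, wavelengths=(4.0, 8.0)):
    feats = []
    for lam in wavelengths:                 # scales of the filter bank
        for k in range(n_thetas):           # orientations of the bank
            kern = cv2.getGaborKernel((21, 21), sigma=4.0,
                                      theta=np.pi * k / n_thetas,
                                      lambd=lam, gamma=0.5, psi=0)
            resp = cv2.filter2D(gray.astype(np.float32), -1, kern)
            feats.extend(abs(resp[y, x]) for (x, y) in points)
    return np.array(feats)

face = np.random.randint(0, 255, (128, 128), np.uint8)  # stand-in image
print(gabor_features(face, [(30, 40), (90, 40), (60, 80)]).shape)
```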
Utterance independent bimodal emotion recognition in spontaneous communication
NASA Astrophysics Data System (ADS)
Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng
2011-12-01
Emotion expressions are sometimes mixed with the utterance expression in spontaneous face-to-face communication, which creates difficulties for emotion recognition. This article introduces methods for reducing the utterance influences in visual parameters for audio-visual-based emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). The utterance reduction is then performed by finding the residual between the real visual parameters and the outputs of the utterance-related visual parameters. To solve this problem, the article introduces a Fused Hidden Markov Model Inversion method trained on a neutrally expressed audio-visual corpus. To reduce the computational complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, Boosting), the utterance reduction method gives better emotion recognition results. The experiments also show the effectiveness of our emotion recognition system when used in a live environment.
Lodder, Gerine M A; Scholte, Ron H J; Goossens, Luc; Engels, Rutger C M E; Verhagen, Maaike
2016-02-01
Based on the belongingness regulation theory (Gardner et al., 2005, Pers. Soc. Psychol. Bull., 31, 1549), this study focuses on the relationship between loneliness and social monitoring. Specifically, we examined whether loneliness relates to performance on three emotion recognition tasks and whether lonely individuals show increased gazing towards their conversation partner's faces in a real-life conversation. Study 1 examined 170 college students (Mage = 19.26; SD = 1.21) who completed an emotion recognition task with dynamic stimuli (morph task) and a micro(-emotion) expression recognition task. Study 2 examined 130 college students (Mage = 19.33; SD = 2.00) who completed the Reading the Mind in the Eyes Test and who had a conversation with an unfamiliar peer while their gaze direction was videotaped. In both studies, loneliness was measured using the UCLA Loneliness Scale version 3 (Russell, 1996, J. Pers. Assess., 66, 20). The results showed that loneliness was unrelated to emotion recognition on all emotion recognition tasks, but that it was related to increased gaze towards their conversation partner's faces. Implications for the belongingness regulation system of lonely individuals are discussed.
Hindocha, Chandni; Freeman, Tom P.; Schafer, Grainne; Gardener, Chelsea; Das, Ravi K.; Morgan, Celia J. A.; Curran, H. Valerie
2015-03-01
Acute administration of the primary psychoactive constituent of cannabis, Δ-9-tetrahydrocannabinol (THC), impairs human facial affect recognition, implicating the endocannabinoid system in emotional processing. Another main constituent of cannabis, cannabidiol (CBD), has seemingly opposite functional effects on the brain. This study aimed to determine the effects of THC and CBD, both alone and in combination, on emotional facial affect recognition. 48 volunteers, selected for high and low frequency of cannabis use and schizotypy, were administered THC (8 mg), CBD (16 mg), THC+CBD (8 mg+16 mg) and placebo, by inhalation, in a 4-way, double-blind, placebo-controlled crossover design. They completed an emotional facial affect recognition task including fearful, angry, happy, sad, surprise and disgust faces varying in intensity from 20% to 100%. A visual analogue scale (VAS) of feeling 'stoned' was also completed. In comparison to placebo, CBD improved emotional facial affect recognition at 60% emotional intensity; THC was detrimental to the recognition of ambiguous faces of 40% intensity. The combination of THC+CBD produced no impairment. Relative to placebo, both THC alone and combined THC+CBD equally increased feelings of being 'stoned'. CBD did not influence feelings of 'stoned'. No effects of frequency of use or schizotypy were found. In conclusion, CBD improves recognition of emotional facial affect and attenuates the impairment induced by THC. This is the first human study examining the effects of different cannabinoids on emotional processing. It provides preliminary evidence that different pharmacological agents acting upon the endocannabinoid system can both improve and impair recognition of emotional faces. PMID:25534187
Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition
NASA Astrophysics Data System (ADS)
Kim, Jonghwa; André, Elisabeth
This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. Compared to audio-visual emotion channels such as facial expression or speech, little attention has been paid so far to physiological signals for emotion recognition. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to search for the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is proven by emotion recognition results.
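The sketch below computes a small, assumed subset of such per-channel features with NumPy/SciPy; sampling rate, band edges, and the four-feature choice are illustrative assumptions, not the paper's full feature set.

```python
# Sketch of per-channel biosignal features (an assumed subset of the many
# features the paper surveys): time-domain statistics plus band power from
# a Welch spectrum, over a 4-channel record (EMG, ECG, skin conductivity,
# respiration). Sampling rate and band edges are assumptions.
import numpy as np
from scipy.signal import welch

def channel_features(x, fs):
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    mask = (f >= 0.1) & (f <= 4.0)             # assumed low-frequency band
    band_power = pxx[mask].sum() * (f[1] - f[0])
    return [x.mean(), x.std(), np.sqrt((x ** 2).mean()), band_power]

fs = 32.0                                       # assumed sampling rate (Hz)
record = np.random.randn(4, int(10 * fs))       # stand-in 10 s, 4 channels
features = np.concatenate([channel_features(ch, fs) for ch in record])
print(features.shape)                           # 4 channels x 4 features
```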
Cost-Sensitive Learning for Emotion Robust Speaker Recognition
Li, Dongdong; Yang, Yingchun; Dai, Weihui
2014-01-01
In the field of information security, voice is one of the most important parts of biometrics. Especially with the development of voice communication through the Internet or telephone system, huge voice data resources are accessed. In speaker recognition, the voiceprint can be applied as a unique password for the user to prove his/her identity. However, speech with various emotions can cause an unacceptably high error rate and aggravate the performance of speaker recognition systems. This paper deals with this problem by introducing a cost-sensitive learning technology to reweight the probability of test affective utterances at the pitch envelope level, which can enhance the robustness of emotion-dependent speaker recognition effectively. Based on that technology, a new architecture of the recognition system as well as its components is proposed in this paper. The experiment conducted on the Mandarin Affective Speech Corpus shows that an improvement of 8% in identification rate over traditional speaker recognition is achieved. PMID:24999492
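A toy sketch of the cost-sensitive rescoring idea follows. The cost values and the power-law rescoring are invented for illustration; the paper's pitch-envelope-level reweighting is not reproduced.

```python
# Toy sketch of the cost-sensitive idea: posteriors from a baseline speaker
# recognizer are rescored according to an (assumed) cost attached to the
# detected emotion, softening the evidence carried by affective speech.
# Costs, rescoring rule and scores are illustrative only.
import numpy as np

emotion_cost = {"neutral": 1.0, "anger": 0.6, "sadness": 0.7}  # assumed

def rescore(posteriors, emotion):
    w = emotion_cost[emotion]          # lower cost -> flatter distribution
    rescored = posteriors ** w
    return rescored / rescored.sum()

speaker_posteriors = np.array([0.55, 0.30, 0.15])  # baseline scores
print(rescore(speaker_posteriors, "anger"))
```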
How Deep Neural Networks Can Improve Emotion Recognition on Video Data
2016-09-25
Khorrami, Pooya; Le Paine, Tom; Brady, Kevin; Dagli, Charlie; Thomas S…
In this work, we present a system that performs emotion recognition on video data using both convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We present our findings on videos from the Audio/Visual+Emotion Challenge (AV+EC2015). In our experiments, we analyze the effects…
Robust representation and recognition of facial emotions using extreme sparse learning.
Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang
2015-07-01
Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
Emotional System for Military Target Identification
2009-10-01
…algorithm [23], and used it to solve a facial recognition problem. In other works [24, 25], we explored the potential of using emotional neural… other application areas, such as security (facial recognition) and medical (blood cell identification), can also be efficiently used in military… [24] …Application of an emotional neural network to facial recognition. Neural Computing and Applications, 18(4), 309-320. [25] Khashman, A. (2009). Blood cell…
Impact of severity of drug use on discrete emotions recognition in polysubstance abusers.
Fernández-Serrano, María José; Lozano, Oscar; Pérez-García, Miguel; Verdejo-García, Antonio
2010-06-01
Neuropsychological studies support the association between severity of drug intake and alterations in specific cognitive domains and neural systems, but there is disproportionately less research on the neuropsychology of emotional alterations associated with addiction. One of the key aspects of adaptive emotional functioning potentially relevant to addiction progression and treatment is the ability to recognize basic emotions in the faces of others. Therefore, the aims of this study were: (i) to examine facial emotion recognition in abstinent polysubstance abusers, and (ii) to explore the association between patterns of quantity and duration of use of several drugs co-abused (including alcohol, cannabis, cocaine, heroin and MDMA) and the ability to identify discrete facial emotional expressions portraying basic emotions. We compared accuracy of emotion recognition of facial expressions portraying six basic emotions (measured with the Ekman Faces Test) between polysubstance abusers (PSA, n=65) and non-drug using comparison individuals (NDCI, n=30), and used regression models to explore the association between quantity and duration of use of the different drugs co-abused and indices of recognition of each of the six emotions, while controlling for relevant socio-demographic and affect-related confounders. Results showed: (i) that PSA had significantly poorer recognition than NDCI for facial expressions of anger, disgust, fear and sadness; (ii) that measures of quantity and duration of drugs used significantly predicted poorer discrete emotions recognition: quantity of cocaine use predicted poorer anger recognition, and duration of cocaine use predicted both poorer anger and fear recognition. Severity of cocaine use also significantly predicted overall recognition accuracy.
A Fuzzy Approach for Facial Emotion Recognition
NASA Astrophysics Data System (ADS)
Gîlcă, Gheorghe; Bîzdoacă, Nicu-George
2015-09-01
This article deals with an emotion recognition system based on fuzzy sets. Human faces are detected in images with the Viola-Jones algorithm, and the Camshift algorithm is used to track them in video sequences. The detected human faces are passed to the fuzzy decision system, which is based on fuzzified measurements of facial variables: the eyebrows, eyelids and mouth. The system can easily determine the emotional state of a person.
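The detection-and-tracking front end of such a pipeline can be sketched with OpenCV alone; the fuzzy decision stage is not reproduced, and the sketch assumes a camera on device 0 with a visible face.

```python
# Front-end sketch of the pipeline described above, using OpenCV only:
# a Viola-Jones (Haar cascade) detector finds the face, then CamShift
# tracks it through the video. The fuzzy stage is not reproduced, and a
# visible face on camera 0 is assumed.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

ok, frame = cap.read()
faces = cascade.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
x, y, w, h = faces[0]                       # take the first detected face
roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [16], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, (x, y, w, h) = cv2.CamShift(back, (x, y, w, h), term)
    # (x, y, w, h) now follows the face; the cropped region would be
    # handed to the fuzzy classification stage described in the abstract
```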
Omar, Rohani; Henley, Susie M. D.; Bartlett, Jonathan W.; Hailstone, Julia C.; Gordon, Elizabeth; Sauter, Disa A.; Frost, Chris; Scott, Sophie K.; Warren, Jason D.
2011-06-01
Despite growing clinical and neurobiological interest in the brain mechanisms that process emotion in music, these mechanisms remain incompletely understood. Patients with frontotemporal lobar degeneration (FTLD) frequently exhibit clinical syndromes that illustrate the effects of breakdown in emotional and social functioning. Here we investigated the neuroanatomical substrate for recognition of musical emotion in a cohort of 26 patients with FTLD (16 with behavioural variant frontotemporal dementia, bvFTD, 10 with semantic dementia, SemD) using voxel-based morphometry. On neuropsychological evaluation, patients with FTLD showed deficient recognition of canonical emotions (happiness, sadness, anger and fear) from music as well as faces and voices compared with healthy control subjects. Impaired recognition of emotions from music was specifically associated with grey matter loss in a distributed cerebral network including insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal and more posterior temporal and parietal cortices, amygdala and the subcortical mesolimbic system. This network constitutes an essential brain substrate for recognition of musical emotion that overlaps with brain regions previously implicated in coding emotional value, behavioural context, conceptual knowledge and theory of mind. Musical emotion recognition may probe the interface of these processes, delineating a profile of brain damage that is essential for the abstraction of complex social emotions. PMID:21385617
A nonlinear heartbeat dynamics model approach for personalized emotion recognition.
Valenza, Gaetano; Citi, Luca; Lanatà, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2013-01-01
Emotion recognition based on autonomic nervous system signs is one of the ambitious goals of affective computing. It is well accepted that standard signal processing techniques require relatively long time series of multivariate records to ensure the reliability and robustness of recognition and classification algorithms. In this work, we present a novel methodology able to assess cardiovascular dynamics during short-time (i.e., < 10 seconds) affective stimuli, thus overcoming some of the limitations of current emotion recognition approaches. We developed a personalized, fully parametric probabilistic framework based on point-process theory, where heartbeat events are modelled using a 2nd-order nonlinear autoregressive integrative structure in order to achieve effective performance in short-time affective assessment. Experimental results show a comprehensive emotional characterization of 4 subjects undergoing passive affective elicitation using a sequence of standardized images gathered from the International Affective Picture System (IAPS). Each picture was identified by its IAPS arousal and valence scores as well as by a self-reported emotional label associating a subjective positive or negative emotion. Results show a clear classification of two defined levels of arousal, valence and self-reported emotional state using features coming from the instantaneous spectrum and bispectrum of the considered RR intervals, reaching up to 90% recognition accuracy.
Emotion recognition based on multiple order features using fractional Fourier transform
NASA Astrophysics Data System (ADS)
Ren, Bo; Liu, Deyin; Qi, Lin
2017-07-01
To address the insufficiency of recent algorithms based on the Two-Dimensional Fractional Fourier Transform (2D-FrFT), this paper proposes a method for emotion recognition based on multiple-order features. Most existing methods utilize features from a single order or a couple of orders of the 2D-FrFT. However, different orders of the 2D-FrFT contribute differently to the feature extraction for emotion recognition, and combining these features can enhance the performance of an emotion recognition system. The proposed approach obtains numerous features extracted at different orders of the 2D-FrFT in the directions of the x-axis and y-axis, and uses their statistical magnitudes as the final feature vectors for recognition. A Support Vector Machine (SVM) is utilized for classification, and the RML Emotion database and the Cohn-Kanade (CK) database are used for the experiments. The experimental results demonstrate the effectiveness of the proposed method.
A study of speech emotion recognition based on hybrid algorithm
NASA Astrophysics Data System (ADS)
Zhu, Ju-xia; Zhang, Chao; Lv, Zhao; Rao, Yao-quan; Wu, Xiao-pei
2011-10-01
To effectively improve the recognition accuracy of speech emotion recognition systems, a hybrid algorithm is proposed which combines a Continuous Hidden Markov Model (CHMM), an All-Class-in-One Neural Network (ACON) and a Support Vector Machine (SVM). In the SVM and ACON methods, some global statistics are used as emotional features, while in the CHMM method, instantaneous features are employed. The recognition rate of the proposed method is 92.25%, with a rejection rate of 0.78%. Furthermore, it obtains relative improvements of 8.53%, 4.69% and 0.78% over the ACON, CHMM and SVM methods, respectively. The experimental results confirm the method's efficiency in distinguishing the anger, happiness, neutral and sadness emotional states.
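A hedged sketch of fusing heterogeneous classifiers follows, using soft voting on toy data. The paper's CHMM and ACON are stood in for by a Gaussian naive Bayes model and an MLP (an actual HMM is not reproduced), and the paper's exact fusion rule is not specified here.

```python
# Sketch of combining heterogeneous classifiers by soft voting; stand-ins
# replace the paper's CHMM and ACON components, and the toy 4-class data
# stands in for anger/happiness/neutral/sadness.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, n_classes=4,
                           n_informative=6, random_state=0)
ensemble = VotingClassifier(
    estimators=[("gen", GaussianNB()),               # CHMM stand-in
                ("nn", MLPClassifier(max_iter=2000, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")                                   # average probabilities
print(ensemble.fit(X, y).score(X, y))
```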
Biologically inspired emotion recognition from speech
NASA Astrophysics Data System (ADS)
Caponetti, Laura; Buscicchio, Cosimo Alessandro; Castellano, Giovanna
2011-12-01
Emotion recognition has become a fundamental task in human-computer interaction systems. In this article, we propose an emotion recognition approach based on biologically inspired methods. Specifically, emotion classification is performed using a long short-term memory (LSTM) recurrent neural network which is able to recognize long-range dependencies between successive temporal patterns. We propose to represent data using features derived from two different models: mel-frequency cepstral coefficients (MFCC) and the Lyon cochlear model. In the experimental phase, results obtained from the LSTM network and the two different feature sets are compared, showing that features derived from the Lyon cochlear model give better recognition results in comparison with those obtained with the traditional MFCC representation.
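The MFCC-plus-recurrent-network pipeline can be sketched compactly; the Lyon cochlear model variant is not reproduced here, the audio is random noise standing in for speech, and the layer sizes and class count are assumptions.

```python
# Sketch of an MFCC front end feeding a recurrent classifier (the Lyon
# cochlear model variant of the paper is not reproduced). librosa computes
# a 13-dimensional MFCC sequence; a small PyTorch LSTM maps it to one of
# a few emotion classes.
import librosa
import numpy as np
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    def __init__(self, n_mfcc=13, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, n_mfcc)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the final state

sr = 22050
wave = np.random.randn(2 * sr).astype(np.float32)        # 2 s stand-in audio
mfcc = librosa.feature.mfcc(y=wave, sr=sr, n_mfcc=13).T  # (time, 13)
logits = EmotionLSTM()(torch.tensor(mfcc[None], dtype=torch.float32))
print(logits.shape)                                      # (1, 4)
```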
Assessment of Emotional Experience and Emotional Recognition in Complicated Grief
Fernández-Alcántara, Manuel; Cruz-Quintana, Francisco; Pérez-Marfil, M. N.; Catena-Martínez, Andrés; Pérez-García, Miguel; Turnbull, Oliver H.
2016-01-01
There is substantial evidence of bias in the processing of emotion in people with complicated grief (CG). Previous studies have tended to assess the expression of emotion in CG, but other aspects of emotion (mainly emotion recognition, and the subjective aspects of emotion) have not been addressed, despite their importance for practicing clinicians. A quasi-experimental design with two matched groups (Complicated Grief, N = 24 and Non-Complicated Grief, N = 20) was carried out. The Facial Expression of Emotion Test (emotion recognition), a set of pictures from the International Affective Picture System (subjective experience of emotion) and the Symptom Checklist 90 Revised (psychopathology) were employed. The CG group showed lower scores on the dimension of valence for specific conditions on the IAPS, related to the subjective experience of emotion. In addition, they presented higher values of psychopathology. In contrast, statistically significant results were not found for the recognition of emotion. In conclusion, from a neuropsychological point of view, the subjective aspects of emotion and psychopathology seem central in explaining the experience of those with CG. These results are clinically significant for psychotherapists and psychoanalysts working in the field of grief and loss. PMID:26903928
Bihippocampal damage with emotional dysfunction: impaired auditory recognition of fear.
Ghika-Schmid, F; Ghika, J; Vuilleumier, P; Assal, G; Vuadens, P; Scherer, K; Maeder, P; Uske, A; Bogousslavsky, J
1997-01-01
A right-handed man developed a sudden transient amnestic syndrome associated with bilateral hemorrhage of the hippocampi, probably due to Urbach-Wiethe disease. In the 3rd month, despite significant hippocampal structural damage on imaging, only a milder degree of retrograde and anterograde amnesia persisted on detailed neuropsychological examination. On systematic testing of the recognition of facial and vocal expressions of emotion, we found an impairment of the vocal perception of fear, but not of other emotions, such as joy, sadness and anger. Such selective impairment of fear perception was not present in the recognition of facial expressions of emotion. Thus emotional perception varies according to the different aspects of emotions and the different modalities of presentation (faces versus voices). This is consistent with the idea that there may be multiple emotion systems. The study of emotional perception in this unique case of bilateral involvement of the hippocampus suggests that this structure may play a critical role in the recognition of fear in vocal expression, possibly dissociated from that of other emotions and from that of fear in facial expression. In view of recent data suggesting that the amygdala plays a role in the recognition of fear in the auditory as well as the visual modality, this could suggest that the hippocampus may be part of the auditory pathway of fear recognition.
Park, Soowon; Kim, Taehoon; Shin, Seong A; Kim, Yu Kyeong; Sohn, Bo Kyung; Park, Hyeon-Ju; Youn, Jung-Hae; Lee, Jun-Young
2017-01-01
Background: Facial emotion recognition (FER) is impaired in individuals with frontotemporal dementia (FTD) and Alzheimer’s disease (AD) when compared to healthy older adults. Since deficits in emotion recognition are closely related to caregiver burden or social interactions, researchers have fundamental interest in FER performance in patients with dementia. Purpose: The purpose of this study was to identify the performance profiles of six facial emotions (i.e., fear, anger, disgust, sadness, surprise, and happiness) and neutral faces measured among Korean healthy control (HCs), and those with mild cognitive impairment (MCI), AD, and FTD. Additionally, the neuroanatomical correlates of facial emotions were investigated. Methods: A total of 110 (33 HC, 32 MCI, 32 AD, 13 FTD) older adult participants were recruited from two different medical centers in metropolitan areas of South Korea. These individuals underwent an FER test that was used to assess the recognition of emotions or absence of emotion (neutral) in 35 facial stimuli. Repeated measures two-way analyses of variance were used to examine the distinct profiles of emotional recognition among the four groups. We also performed brain imaging and voxel-based morphometry (VBM) on the participants to examine the associations between FER scores and gray matter volume. Results: The mean score of negative emotion recognition (i.e., fear, anger, disgust, and sadness) clearly discriminated FTD participants from individuals with MCI and AD and HC [F(3,106) = 10.829, p < 0.001, η2 = 0.235], whereas the mean score of positive emotion recognition (i.e., surprise and happiness) did not. A VBM analysis showed negative emotions were correlated with gray matter volume of anterior temporal regions, whereas positive emotions were related to gray matter volume of fronto-parietal regions. Conclusion: Impairment of negative FER in patients with FTD is cross-cultural. The discrete neural correlates of FER indicate that emotional recognition processing is a multi-modal system in the brain. Focusing on the negative emotion recognition is a more effective way to discriminate healthy aging, MCI, and AD from FTD in older Korean adults. PMID:29249960
Ipser, Jonathan C; Terburg, David; Syal, Supriya; Phillips, Nicole; Solms, Mark; Panksepp, Jaak; Malcolm-Smith, Susan; Thomas, Kevin; Stein, Dan J; van Honk, Jack
2013-01-01
In rodents, the endogenous opioid system has been implicated in emotion regulation, and in the reduction of fear in particular. In humans, while there is evidence that the opioid antagonist naloxone acutely enhances the acquisition of conditioned fear, there are no corresponding data on the effect of opioid agonists in moderating responses to fear. We investigated whether a single 0.2 mg administration of the mu-opioid agonist buprenorphine would decrease fear sensitivity with an emotion-recognition paradigm. Healthy human subjects participated in a randomized placebo-controlled within-subject design, in which they performed a dynamic emotion recognition task 120 min after administration of buprenorphine and placebo. In the recognition task, basic emotional expressions were morphed between their full expression and neutral in 2% steps, and presented as dynamic video-clips with final frames of different emotional intensity for each trial, which allows for a fine-grained measurement of emotion sensitivity. Additionally, visual analog scales were used to investigate acute effects of buprenorphine on mood. Compared to placebo, buprenorphine resulted in a significant reduction in the sensitivity for recognizing fearful facial expressions exclusively. Our data demonstrate, for the first time in humans, that acute up-regulation of the opioid system reduces fear recognition sensitivity. Moreover, the absence of an effect of buprenorphine on mood provides evidence of a direct influence of opioids upon the core fear system in the human brain.
Martinelli, Eugenio; Mencattini, Arianna; Daprati, Elena; Di Natale, Corrado
2016-01-01
Humans can communicate their emotions by modulating facial expressions or the tone of their voice. Albeit numerous applications exist that enable machines to read facial emotions and recognize the content of verbal messages, methods for speech emotion recognition are still in their infancy. Yet, fast and reliable applications for emotion recognition are the obvious advancement of present 'intelligent personal assistants', and may have countless applications in diagnostics, rehabilitation and research. Taking inspiration from the dynamics of human group decision-making, we devised a novel speech emotion recognition system that applies, for the first time, a semi-supervised prediction model based on consensus. Three tests were carried out to compare this algorithm with traditional approaches. Labeling performances relative to a public database of spontaneous speeches are reported. The novel system appears to be fast, robust and less computationally demanding than traditional methods, allowing for easier implementation in portable voice-analyzers (as used in rehabilitation, research, industry, etc.) and for applications in the research domain (such as real-time pairing of stimuli to participants' emotional state, selective/differential data collection based on emotional content, etc.).
End-to-End Multimodal Emotion Recognition Using Deep Neural Networks
NASA Astrophysics Data System (ADS)
Tzirakis, Panagiotis; Trigeorgis, George; Nicolaou, Mihalis A.; Schuller, Bjorn W.; Zafeiriou, Stefanos
2017-12-01
Automatic affect recognition is a challenging task due to the various modalities through which emotions can be expressed. Applications can be found in many domains, including multimedia retrieval and human-computer interaction. In recent years, deep neural networks have been used with great success in determining emotional states. Inspired by this success, we propose an emotion recognition system using the auditory and visual modalities. To capture the emotional content of various styles of speaking, robust features need to be extracted. To this purpose, we utilize a Convolutional Neural Network (CNN) to extract features from the speech, while for the visual modality we use a deep residual network (ResNet) of 50 layers. In addition to the importance of feature extraction, a machine learning algorithm also needs to be insensitive to outliers while being able to model the context. To tackle this problem, Long Short-Term Memory (LSTM) networks are utilized. The system is then trained in an end-to-end fashion where, by also taking advantage of the correlations of each of the streams, we manage to significantly outperform traditional approaches based on auditory and visual handcrafted features for the prediction of spontaneous and natural emotions on the RECOLA database of the AVEC 2016 research challenge on emotion recognition.
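A schematic PyTorch sketch of this kind of architecture follows. Layer sizes, the chunking of the inputs, and the two-dimensional (arousal/valence) output are assumptions; it is not the authors' exact network.

```python
# Schematic sketch of the audio-visual architecture described above: a 1-D
# CNN over raw speech chunks, a ResNet-50 over face frames, and an LSTM
# over the fused features, trainable end-to-end. Dimensions are assumed.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class AVEmotionNet(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.audio = nn.Sequential(                # raw-waveform CNN
            nn.Conv1d(1, 32, 80, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        vis = resnet50(weights=None)
        vis.fc = nn.Identity()                     # 2048-dim visual feature
        self.visual = vis
        self.lstm = nn.LSTM(32 + 2048, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # arousal and valence

    def forward(self, wav_chunks, frames):
        b, t = wav_chunks.shape[:2]
        a = self.audio(wav_chunks.flatten(0, 1)).view(b, t, -1)
        v = self.visual(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(torch.cat([a, v], dim=-1))
        return self.head(out)                      # prediction per time step

net = AVEmotionNet()
wav = torch.randn(2, 5, 1, 1600)                   # (batch, time, 1, samples)
imgs = torch.randn(2, 5, 3, 224, 224)              # (batch, time, C, H, W)
print(net(wav, imgs).shape)                        # (2, 5, 2)
```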
Castagna, Filomena; Montemagni, Cristiana; Maria Milani, Anna; Rocca, Giuseppe; Rocca, Paola; Casacchia, Massimo; Bogetto, Filippo
2013-02-28
This study aimed to evaluate the ability to decode emotion in the auditory and audiovisual modality in a group of patients with schizophrenia, and to explore the role of cognition and psychopathology in affecting these emotion recognition abilities. Ninety-four outpatients in a stable phase and 51 healthy subjects were recruited. Patients were assessed through a psychiatric evaluation and a wide neuropsychological battery. All subjects completed the Comprehensive Affect Testing System (CATS), a group of computerized tests designed to evaluate emotion perception abilities. With respect to the controls, patients were not impaired in the CATS tasks involving discrimination of nonemotional prosody, naming of emotional stimuli expressed by voice and judging the emotional content of a sentence, whereas they showed a specific impairment in decoding emotion in a conflicting auditory condition and in the multichannel modality. Prosody impairment was affected by executive functions, attention and negative symptoms, while the deficit in multisensory emotion recognition was affected by executive functions and negative symptoms. These emotion recognition deficits, rather than being associated purely with emotion perception disturbances in schizophrenia, are affected by core symptoms of the illness.
A Multimodal Emotion Detection System during Human-Robot Interaction
Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.
2013-01-01
In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has been also developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately. PMID:24240598
Smitha, K G; Vinod, A P
2015-11-01
Children with autism spectrum disorder have difficulty in understanding the emotional and mental states from the facial expressions of the people they interact with. The inability to understand other people's emotions will hinder their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. The portability of the system will ensure ease of use and real-time emotion recognition, which will aid immediate feedback while communicating with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to be implemented in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for realizing a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex 7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits.
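For reference, the PCA step realized in fixed-point on the FPGA corresponds, in floating point, to roughly the following sketch; the component count and data layout are assumptions rather than the paper's parameters.

```python
# Floating-point reference of PCA feature extraction for face images; the
# hardware version would use an 8-bit fixed-point word length instead.
import numpy as np

def pca_project(train_faces, test_face, n_components=20):
    """train_faces: (n_samples, n_pixels) flattened images; test_face: (n_pixels,)."""
    mean = train_faces.mean(axis=0)
    centered = train_faces - mean
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_components]
    return (test_face - mean) @ basis.T    # low-dimensional feature vector
```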
McCubbin, James A; Loveless, James P; Graham, Jack G; Hall, Gabrielle A; Bart, Ryan M; Moore, DeWayne D; Merritt, Marcellus M; Lane, Richard D; Thayer, Julian F
2014-02-01
Persons with higher blood pressure have emotional dampening in some contexts. This may reflect interactive changes in central nervous system control of affect and autonomic function in the early stages of hypertension development. The purpose of this study is to determine the independence of cardiovascular emotional dampening from alexithymia to better understand the role of affect dysregulation in blood pressure elevations. Ninety-six normotensives were assessed for resting systolic and diastolic (DBP) blood pressure, recognition of emotions in faces and sentences using the Perception of Affect Task (PAT), alexithymia, anxiety, and defensiveness. Resting DBP significantly predicted PAT emotion recognition accuracy in men after adjustment for age, self-reported affect, and alexithymia. Cardiovascular emotional dampening is independent of alexithymia and affect in men. Dampened emotion recognition could potentially influence interpersonal communication and psychosocial distress, thereby further contributing to BP dysregulation and increased cardiovascular risk.
Martinelli, Eugenio; Mencattini, Arianna; Di Natale, Corrado
2016-01-01
Humans can communicate their emotions by modulating facial expressions or the tone of their voice. Although numerous applications exist that enable machines to read facial emotions and recognize the content of verbal messages, methods for speech emotion recognition are still in their infancy. Yet, fast and reliable applications for emotion recognition are the obvious advancement of present ‘intelligent personal assistants’, and may have countless applications in diagnostics, rehabilitation and research. Taking inspiration from the dynamics of human group decision-making, we devised a novel speech emotion recognition system that applies, for the first time, a semi-supervised prediction model based on consensus. Three tests were carried out to compare this algorithm with traditional approaches. Labeling performance on a public database of spontaneous speech is reported. The novel system appears to be fast, robust and less computationally demanding than traditional methods, allowing for easier implementation in portable voice-analyzers (as used in rehabilitation, research, industry, etc.) and for applications in the research domain (such as real-time pairing of stimuli to participants’ emotional state, selective/differential data collection based on emotional content, etc.).
Emotional facial recognition in proactive and reactive violent offenders.
Philipp-Wiegmann, Florence; Rösler, Michael; Retz-Junginger, Petra; Retz, Wolfgang
2017-10-01
The purpose of this study is to analyse individual differences in emotional facial recognition ability in violent offenders, who were characterised as either reactive or proactive in relation to their offending. In accordance with findings of our previous study, we expected higher impairments in facial recognition in reactive than proactive violent offenders. To assess the ability to recognize facial expressions, the computer-based Facial Emotional Expression Labeling Test (FEEL) was performed. Group allocation of reactive and proactive violent offenders and assessment of psychopathic traits were performed by an independent forensic expert using rating scales (PROREA, PCL-SV). Compared to proactive violent offenders and controls, the performance of emotion recognition in the reactive offender group was significantly lower, both in total and especially in recognition of negative emotions such as anxiety (d = -1.29), sadness (d = -1.54), and disgust (d = -1.11). Furthermore, reactive violent offenders showed a tendency to interpret non-anger emotions as anger. In contrast, proactive violent offenders performed as well as controls. General and specific deficits in reactive violent offenders are in line with the results of our previous study and correspond to predictions of the Integrated Emotion System (IES, 7) and the hostile attribution processes (21). Due to the different error pattern in the FEEL test, the theoretical distinction between proactive and reactive aggression can be supported based on emotion recognition, even though aggression itself is always a heterogeneous act rather than a distinct one-dimensional concept.
The development of cross-cultural recognition of vocal emotion during childhood and adolescence.
Chronaki, Georgia; Wigelsworth, Michael; Pell, Marc D; Kotz, Sonja A
2018-06-14
Humans have an innate set of emotions recognised universally. However, emotion recognition also depends on socio-cultural rules. Although adults recognise vocal emotions universally, they identify emotions more accurately in their native language. We examined developmental trajectories of universal vocal emotion recognition in children. Eighty native English speakers completed a vocal emotion recognition task in their native language (English) and foreign languages (Spanish, Chinese, and Arabic) expressing anger, happiness, sadness, fear, and neutrality. Emotion recognition was compared across 8-to-10-year-olds, 11-to-13-year-olds, and adults. Measures of behavioural and emotional problems were also taken. Results showed that although emotion recognition was above chance for all languages, native English speaking children were more accurate in recognising vocal emotions in their native language. There was a larger improvement in recognising vocal emotion from the native language during adolescence. Vocal anger recognition did not improve with age for the non-native languages. This is the first study to demonstrate universality of vocal emotion recognition in children whilst supporting an "in-group advantage" for more accurate recognition in the native language. Findings highlight the role of experience in emotion recognition, have implications for child development in modern multicultural societies and address important theoretical questions about the nature of emotions.
Support vector machine for automatic pain recognition
NASA Astrophysics Data System (ADS)
Monwar, Md Maruf; Rezaei, Siamak
2009-02-01
Facial expressions are a key index of emotion, and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects faces from stored video frames using a skin-color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural-network-based and eigenimage-based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier can improve the performance of an automatic pain recognition system.
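A minimal scikit-learn sketch of the classification stage, assuming the location and shape features have already been computed from detected faces; the feature vectors and labels below are synthetic placeholders, not the paper's data.

```python
# Synthetic stand-in for extracted face location/shape features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 32))                  # placeholder feature vectors
y = rng.integers(0, 2, 200)                # 1 = pain, 0 = no pain
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])                  # train on the first 150 samples
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```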
Emotion recognition based on physiological changes in music listening.
Kim, Jonghwa; André, Elisabeth
2008-12-01
Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological dataset to a feature-based multiclass classification. In order to collect a physiological dataset from multiple subjects over many weeks, we used a musical induction method which spontaneously leads subjects to real emotional states, without any deliberate lab setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. Improved recognition accuracy of 95% and 70% for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.
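A rough sketch of the EMDC idea on synthetic data: the four-class problem is decomposed into an arousal decision followed by a valence decision within each arousal branch. Plain LDA stands in for the paper's extended pLDA, and the features and labels are invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

rng = np.random.default_rng(0)
X = rng.random((400, 30))                  # physiological feature vectors
arousal = rng.integers(0, 2, 400)          # 0 = low, 1 = high
valence = rng.integers(0, 2, 400)          # 0 = negative, 1 = positive

# Stage 1: arousal classifier; Stage 2: one valence classifier per branch.
clf_arousal = LDA().fit(X, arousal)
clf_val_low = LDA().fit(X[arousal == 0], valence[arousal == 0])
clf_val_high = LDA().fit(X[arousal == 1], valence[arousal == 1])

def predict_emotion(x):
    x = x.reshape(1, -1)
    a = clf_arousal.predict(x)[0]
    v = (clf_val_high if a == 1 else clf_val_low).predict(x)[0]
    return {"arousal": int(a), "valence": int(v)}   # one of four quadrants

print(predict_emotion(X[0]))
```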
Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka
2014-01-01
Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus because the attentional strategies associated with promotion focus enhance performance on well-learned or innate tasks - such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced and better facial emotion recognition was observed in a promotion focus compared to a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy in a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.
Effect of Dopamine Therapy on Nonverbal Affect Burst Recognition in Parkinson's Disease
Péron, Julie; Grandjean, Didier; Drapier, Sophie; Vérin, Marc
2014-01-01
Background: Parkinson's disease (PD) provides a model for investigating the involvement of the basal ganglia and mesolimbic dopaminergic system in the recognition of emotions from voices (i.e., emotional prosody). Although previous studies of emotional prosody recognition in PD have reported evidence of impairment, none of them compared PD patients at different stages of the disease, or ON and OFF dopamine replacement therapy, making it difficult to determine whether their impairment was due to general cognitive deterioration or to a more specific dopaminergic deficit. Methods: We explored the involvement of the dopaminergic pathways in the recognition of nonverbal affect bursts (onomatopoeias) in 15 newly diagnosed PD patients in the early stages of the disease, 15 PD patients in the advanced stages of the disease and 15 healthy controls. The early PD group was studied in two conditions: ON and OFF dopaminergic therapy. Results: The early PD patients performed more poorly in the ON condition than in the OFF one, for overall emotion recognition, as well as for the recognition of anger, disgust and fear. Additionally, for anger, the early PD ON patients performed more poorly than controls. For overall emotion recognition, both advanced PD patients and early PD ON patients performed more poorly than controls. Analysis of continuous ratings on target and nontarget visual analog scales confirmed these patterns of results, showing a systematic emotional bias in both the advanced PD and early PD ON (but not OFF) patients compared with controls. Conclusions: These results i) confirm the involvement of the dopaminergic pathways and basal ganglia in emotional prosody recognition, and ii) suggest a possibly deleterious effect of dopatherapy on affective abilities in the early stages of PD.
Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola
2014-01-01
Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.
Functional architecture of visual emotion recognition ability: A latent variable approach.
Lewis, Gary J; Lefevre, Carmen E; Young, Andrew W
2016-05-01
Emotion recognition has been a focus of considerable attention for several decades. However, despite this interest, the underlying structure of individual differences in emotion recognition ability has been largely overlooked and thus is poorly understood. For example, limited knowledge exists concerning whether recognition ability for one emotion (e.g., disgust) generalizes to other emotions (e.g., anger, fear). Furthermore, it is unclear whether emotion recognition ability generalizes across modalities, such that those who are good at recognizing emotions from the face, for example, are also good at identifying emotions from nonfacial cues (such as cues conveyed via the body). The primary goal of the current set of studies was to address these questions through establishing the structure of individual differences in visual emotion recognition ability. In three independent samples (Study 1: n = 640; Study 2: n = 389; Study 3: n = 303), we observed that the ability to recognize visually presented emotions is based on different sources of variation: a supramodal emotion-general factor, supramodal emotion-specific factors, and face- and within-modality emotion-specific factors. In addition, we found evidence that general intelligence and alexithymia were associated with supramodal emotion recognition ability. Autism-like traits, empathic concern, and alexithymia were independently associated with face-specific emotion recognition ability. These results (a) provide a platform for further individual differences research on emotion recognition ability, (b) indicate that differentiating levels within the architecture of emotion recognition ability is of high importance, and (c) show that the capacity to understand expressions of emotion in others is linked to broader affective and cognitive processes.
Emotion recognition from EEG using higher order crossings.
Petrantonakis, Panagiotis C; Hadjileontiadis, Leontios J
2010-03-01
Electroencephalogram (EEG)-based emotion recognition is a relatively new field in the affective computing area with challenging issues regarding the induction of the emotional states and the extraction of the features in order to achieve optimum classification performance. In this paper, a novel emotion evocation and EEG-based feature extraction technique is presented. In particular, the mirror neuron system concept was adapted to efficiently foster emotion induction by the process of imitation. In addition, higher order crossings (HOC) analysis was employed for the feature extraction scheme and a robust classification method, namely HOC-emotion classifier (HOC-EC), was implemented testing four different classifiers [quadratic discriminant analysis (QDA), k-nearest neighbor, Mahalanobis distance, and support vector machines (SVMs)], in order to accomplish efficient emotion recognition. Through a series of facial expression image projections, EEG data were collected from 16 healthy subjects using only 3 EEG channels, namely Fp1, Fp2, and a bipolar channel of F3 and F4 positions according to the 10-20 system. Two scenarios were examined using EEG data from a single channel and from combined channels, respectively. HOC-EC appears to outperform other feature extraction methods, achieving a 62.3% (using QDA) and 83.33% (using SVM) classification accuracy for the single-channel and combined-channel cases, respectively, differentiating among the six basic emotions, i.e., happiness, surprise, anger, fear, disgust, and sadness. As the emotion class-set is reduced in size, HOC-EC converges toward the maximum classification rate (100% for five or fewer emotions), justifying the efficiency of the proposed approach. This could facilitate the integration of HOC-EC in human machine interfaces, such as pervasive healthcare systems, enhancing their affective character and providing information about the user's emotional status (e.g., identifying user's emotion experiences, recurring affective states, time-dependent emotional trends).
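A minimal sketch of the HOC feature itself: the number of zero crossings of the zero-mean signal after successive applications of the difference operator. The maximum order below is an assumed parameter, not the paper's setting.

```python
import numpy as np

def hoc_features(x, k_max=10):
    """x: 1-D EEG segment; returns zero-crossing counts for orders 1..k_max."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                        # zero-mean, as HOC requires
    feats = []
    for _ in range(k_max):
        signs = np.signbit(x).astype(np.int8)
        feats.append(int(np.count_nonzero(np.diff(signs))))   # sign changes
        x = np.diff(x)                      # higher-order difference
    return np.array(feats)                  # feed to QDA, k-NN, SVM, etc.
```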
Emotion Recognition in Frontotemporal Dementia and Alzheimer's Disease: A New Film-Based Assessment
Goodkind, Madeleine S.; Sturm, Virginia E.; Ascher, Elizabeth A.; Shdo, Suzanne M.; Miller, Bruce L.; Rankin, Katherine P.; Levenson, Robert W.
2015-01-01
Deficits in recognizing others' emotions are reported in many psychiatric and neurological disorders, including autism, schizophrenia, behavioral variant frontotemporal dementia (bvFTD) and Alzheimer's disease (AD). Most previous emotion recognition studies have required participants to identify emotional expressions in photographs. This type of assessment differs from real-world emotion recognition in important ways: Images are static rather than dynamic, include only 1 modality of emotional information (i.e., visual information), and are presented absent a social context. Additionally, existing emotion recognition batteries typically include multiple negative emotions, but only 1 positive emotion (i.e., happiness) and no self-conscious emotions (e.g., embarrassment). We present initial results using a new task for assessing emotion recognition that was developed to address these limitations. In this task, respondents view a series of short film clips and are asked to identify the main characters' emotions. The task assesses multiple negative, positive, and self-conscious emotions based on information that is multimodal, dynamic, and socially embedded. We evaluate this approach in a sample of patients with bvFTD, AD, and normal controls. Results indicate that patients with bvFTD have emotion recognition deficits in all 3 categories of emotion compared to the other groups. These deficits were especially pronounced for negative and self-conscious emotions. Emotion recognition in this sample of patients with AD was indistinguishable from controls. These findings underscore the utility of this approach to assessing emotion recognition and suggest that previous findings that recognition of positive emotion was preserved in dementia patients may have resulted from the limited sampling of positive emotion in traditional tests.
Labuschagne, Izelle; Jones, Rebecca; Callaghan, Jenny; Whitehead, Daisy; Dumas, Eve M; Say, Miranda J; Hart, Ellen P; Justo, Damian; Coleman, Allison; Dar Santos, Rachelle C; Frost, Chris; Craufurd, David; Tabrizi, Sarah J; Stout, Julie C
2013-05-15
Facial emotion recognition impairments have been reported in Huntington's disease (HD). However, the nature of the impairments across the spectrum of HD remains unclear. We report on emotion recognition data from 344 participants comprising premanifest HD (PreHD) and early HD patients, and controls. In a test of recognition of facial emotions, we examined responses to six basic emotional expressions and neutral expressions. In addition, and within the early HD sample, we tested for differences on emotion recognition performance between those 'on' vs. 'off' neuroleptic or selective serotonin reuptake inhibitor (SSRI) medications. The PreHD groups showed significant (p<0.05) impaired recognition, compared to controls, on fearful, angry and surprised faces; whereas the early HD groups were significantly impaired across all emotions including neutral expressions. In early HD, neuroleptic use was associated with worse facial emotion recognition, whereas SSRI use was associated with better facial emotion recognition. The findings suggest that emotion recognition impairments exist across the HD spectrum, but are relatively more widespread in manifest HD than in the premanifest period. Commonly prescribed medications to treat HD-related symptoms also appear to affect emotion recognition. These findings have important implications for interpersonal communication and medication usage in HD.
Cognitive penetrability and emotion recognition in human facial expressions
Marchi, Francesco
2015-01-01
Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion.
Dynorphins regulate the strength of social memory.
Bilkei-Gorzo, A; Mauer, D; Michel, K; Zimmer, A
2014-02-01
Emotionally arousing events, like an encounter with an unfamiliar conspecific, produce strong and vivid memories, in which the hippocampus and amygdala play a crucial role. It is less understood, however, which neurotransmitter systems regulate the strength of social memories, which have a strong emotional component. It was shown previously that dynorphin signalling is involved in the formation and extinction of fear memories; therefore, we asked if it influences social memories as well. Mice with a genetic deletion of the prodynorphin gene Pdyn (Pdyn(-/-)) showed a superior partner recognition ability, whereas their performance in the object recognition test was identical to that of wild-type mice. Pharmacological blockade of kappa opioid receptors (KORs) led to an enhanced social memory in wild-type animals, whereas activation of KORs reduced the recognition ability of Pdyn(-/-) mice. The partner recognition test situation induced a higher elevation in dynorphin A levels in the central and basolateral amygdala as well as in the hippocampus, and also higher dynorphin B levels in the hippocampus, than the object recognition test situation. Our results suggest that dynorphin system activity is increased in emotionally arousing situations and decreases the formation of social memories. Thus, dynorphin signalling is involved in the formation of social memories by diminishing the emotional component of the experience.
Hargreaves, A; Mothersill, O; Anderson, M; Lawless, S; Corvin, A; Donohoe, G
2016-10-28
Deficits in facial emotion recognition have been associated with functional impairments in patients with Schizophrenia (SZ). Whilst a strong ecological argument has been made for the use of both dynamic facial expressions and varied emotion intensities in research, SZ emotion recognition studies to date have primarily used static stimuli of a singular, 100%, intensity of emotion. To address this issue, the present study aimed to investigate accuracy of emotion recognition amongst patients with SZ and healthy subjects using dynamic facial emotion stimuli of varying intensities. To this end an emotion recognition task (ERT) designed by Montagne (2007) was adapted and employed. Forty-seven patients with a DSM-IV diagnosis of SZ and 51 healthy participants were assessed for emotion recognition. Results of the ERT were tested for correlation with performance in areas of cognitive ability typically found to be impaired in psychosis, including IQ, memory, attention and social cognition. Patients were found to perform less well than healthy participants at recognising each of the 6 emotions analysed. Surprisingly, however, groups did not differ in terms of the impact of emotion intensity on recognition accuracy; for both groups higher intensity levels predicted greater accuracy, but no significant interaction between diagnosis and emotional intensity was found for any of the 6 emotions. Accuracy of emotion recognition was, however, more strongly correlated with cognition in the patient cohort. Whilst this study demonstrates the feasibility of using ecologically valid dynamic stimuli in the study of emotion recognition accuracy, varying the intensity of the emotion displayed was not demonstrated to impact patients and healthy participants differentially, and thus may not be a necessary variable to include in emotion recognition research.
Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.
Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M
2014-11-01
Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. In order to identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were implemented in order to elicit potentially confounding visuo-perceptual functioning and facial processing. Nineteen participants with paranoid SZ, 22 with HFASD and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD in all obtained emotion recognition measures, but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is a SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of both disorders.
Speaker emotion recognition: from classical classifiers to deep neural networks
NASA Astrophysics Data System (ADS)
Mezghani, Eya; Charfeddine, Maha; Nicolas, Henri; Ben Amar, Chokri
2018-04-01
Speaker emotion recognition is considered among the most challenging tasks in recent years. In fact, automatic systems for security, medicine or education can be improved when the affective state of speech is considered. In this paper, a twofold approach for speech emotion classification is proposed: first, a relevant set of features is adopted; second, numerous supervised training techniques, spanning classic methods as well as deep learning, are evaluated. Experimental results indicate that deep architectures can improve classification performance on two affective databases, the Berlin Dataset of Emotional Speech and the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset.
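A toy version of this twofold setup, training a classic classifier and a small neural network on the same acoustic feature set; the synthetic features below merely stand in for real MFCC or prosodic statistics.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((500, 60))                  # stand-in acoustic features
y = rng.integers(0, 7, 500)                # 7 emotion classes, as in EmoDB
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = [("SVM", SVC()),
          ("MLP", MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500))]
for name, clf in models:                   # same features, different learners
    clf.fit(Xtr, ytr)
    print(name, "accuracy:", clf.score(Xte, yte))
```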
Evaluating deep learning architectures for Speech Emotion Recognition.
Fayek, Haytham M; Lech, Margaret; Cavedon, Lawrence
2017-08-01
Speech Emotion Recognition (SER) can be regarded as a static or dynamic classification problem, which makes SER an excellent test bed for investigating and comparing various deep learning architectures. We describe a frame-based formulation to SER that relies on minimal speech processing and end-to-end deep learning to model intra-utterance dynamics. We use the proposed SER system to empirically explore feed-forward and recurrent neural network architectures and their variants. Experiments conducted illuminate the advantages and limitations of these architectures in paralinguistic speech recognition and emotion recognition in particular. As a result of our exploration, we report state-of-the-art results on the IEMOCAP database for speaker-independent SER and present quantitative and qualitative assessments of the models' performances.
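A minimal sketch of a frame-based formulation in PyTorch: a per-frame network whose outputs are averaged over the utterance to yield an utterance-level decision. The input features and layer sizes are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class FrameSER(nn.Module):
    def __init__(self, n_features=40, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(            # per-frame feed-forward model
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, frames):               # frames: (T, n_features)
        logits = self.net(frames)            # per-frame scores
        return logits.mean(dim=0)            # average to utterance level

scores = FrameSER()(torch.randn(120, 40))    # 120 frames of 40-d features
print(scores.argmax().item())                # predicted emotion index
```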
Advanced Parkinson disease patients have impairment in prosody processing.
Albuquerque, Luisa; Martins, Maurício; Coelho, Miguel; Guedes, Leonor; Ferreira, Joaquim J; Rosa, Mário; Martins, Isabel Pavão
2016-01-01
The ability to recognize and interpret emotions in others is a crucial prerequisite of adequate social behavior. Impairments in emotion processing have been reported from the early stages of Parkinson's disease (PD). This study aims to characterize emotion recognition in advanced Parkinson's disease (APD) candidates for deep-brain stimulation and to compare emotion recognition abilities in visual and auditory domains. APD patients, defined as those with levodopa-induced motor complications (N = 42), and healthy controls (N = 43) matched by gender, age, and educational level, undertook the Comprehensive Affect Testing System (CATS), a battery that evaluates recognition of seven basic emotions (happiness, sadness, anger, fear, surprise, disgust, and neutral) on facial expressions and four emotions on prosody (happiness, sadness, anger, and fear). APD patients were assessed during the "ON" state. Group performance was compared with independent-samples t tests. Compared to controls, APD had significantly lower scores on the discrimination and naming of emotions in prosody, and visual discrimination of neutral faces, but no significant differences in visual emotional tasks. The contrasting performance in emotional processing between visual and auditory stimuli suggests that APD candidates for surgery have either a selective difficulty in recognizing emotions in prosody or a general defect in prosody processing. Studies investigating early-stage PD, and the effect of subcortical lesions in prosody processing, favor the latter interpretation. Further research is needed to understand these deficits in emotional prosody recognition and their possible contribution to later behavioral or neuropsychiatric manifestations of PD.
Arousal Rather than Basic Emotions Influence Long-Term Recognition Memory in Humans.
Marchewka, Artur; Wypych, Marek; Moslehi, Abnoos; Riegel, Monika; Michałowski, Jarosław M; Jednoróg, Katarzyna
2016-01-01
Emotion can influence various cognitive processes, however its impact on memory has been traditionally studied over relatively short retention periods and in line with dimensional models of affect. The present study aimed to investigate emotional effects on long-term recognition memory according to a combined framework of affective dimensions and basic emotions. Images selected from the Nencki Affective Picture System were rated on the scale of affective dimensions and basic emotions. After 6 months, subjects took part in a surprise recognition test during an fMRI session. The more negative the pictures the better they were remembered, but also the more false recognitions they provoked. Similar effects were found for the arousal dimension. Recognition success was greater for pictures with lower intensity of happiness and with higher intensity of surprise, sadness, fear, and disgust. Consecutive fMRI analyses showed a significant activation for remembered (recognized) vs. forgotten (not recognized) images in anterior cingulate and bilateral anterior insula as well as in bilateral caudate nuclei and right thalamus. Further, arousal was found to be the only subjective rating significantly modulating brain activation. Higher subjective arousal evoked higher activation associated with memory recognition in the right caudate and the left cingulate gyrus. Notably, no significant modulation was observed for other subjective ratings, including basic emotion intensities. These results emphasize the crucial role of arousal for long-term recognition memory and support the hypothesis that the memorized material, over time, becomes stored in a distributed cortical network including the core salience network and basal ganglia.
Does cortisol modulate emotion recognition and empathy?
Duesenberg, Moritz; Weber, Juliane; Schulze, Lars; Schaeuffele, Carmen; Roepke, Stefan; Hellmann-Regen, Julian; Otte, Christian; Wingenfeld, Katja
2016-04-01
Emotion recognition and empathy are important aspects in the interaction and understanding of other people's behaviors and feelings. The human environment comprises stressful situations that impact social interactions on a daily basis. The aim of the study was to examine the effects of the stress hormone cortisol on emotion recognition and empathy. In this placebo-controlled study, 40 healthy men and 40 healthy women (mean age 24.5 years) received either 10 mg of hydrocortisone or placebo. We used the Multifaceted Empathy Test to measure emotional and cognitive empathy. Furthermore, we examined emotion recognition from facial expressions, which contained two emotions (anger and sadness) and two emotion intensities (40% and 80%). We did not find a main effect for treatment or sex on either empathy or emotion recognition, but we found a sex × emotion interaction on emotion recognition. The main result was a four-way interaction on emotion recognition including treatment, sex, emotion and task difficulty. At 40% task difficulty, women recognized angry faces better than men in the placebo condition. Furthermore, in the placebo condition, men recognized sadness better than anger. At 80% task difficulty, men and women performed equally well in recognizing sad faces, but men performed worse compared to women with regard to angry faces. Our results thus did not support the hypothesis that increases in cortisol concentration alone influence empathy and emotion recognition in healthy young individuals. However, sex and task difficulty appear to be important variables in emotion recognition from facial expressions.
Sully, K; Sonuga-Barke, E J S; Fairchild, G
2015-07-01
There is accumulating evidence of impairments in facial emotion recognition in adolescents with conduct disorder (CD). However, the majority of studies in this area have only been able to demonstrate an association, rather than a causal link, between emotion recognition deficits and CD. To move closer towards understanding the causal pathways linking emotion recognition problems with CD, we studied emotion recognition in the unaffected first-degree relatives of CD probands, as well as those with a diagnosis of CD. Using a family-based design, we investigated facial emotion recognition in probands with CD (n = 43), their unaffected relatives (n = 21), and healthy controls (n = 38). We used the Emotion Hexagon task, an alternative forced-choice task using morphed facial expressions depicting the six primary emotions, to assess facial emotion recognition accuracy. Relative to controls, the CD group showed impaired recognition of anger, fear, happiness, sadness and surprise (all p < 0.005). Similar to probands with CD, unaffected relatives showed deficits in anger and happiness recognition relative to controls (all p < 0.008), with a trend toward a deficit in fear recognition. There were no significant differences in performance between the CD probands and the unaffected relatives following correction for multiple comparisons. These results suggest that facial emotion recognition deficits are present in adolescents who are at increased familial risk for developing antisocial behaviour, as well as those who have already developed CD. Consequently, impaired emotion recognition appears to be a viable familial risk marker or candidate endophenotype for CD.
Exploring Cultural Differences in the Recognition of the Self-Conscious Emotions.
Chung, Joanne M; Robins, Richard W
2015-01-01
Recent research suggests that the self-conscious emotions of embarrassment, shame, and pride have distinct, nonverbal expressions that can be recognized in the United States at above-chance levels. However, few studies have examined the recognition of these emotions in other cultures, and little research has been conducted in Asia. Consequently the cross-cultural generalizability of self-conscious emotions has not been firmly established. Additionally, there is no research that examines cultural variability in the recognition of the self-conscious emotions. Cultural values and exposure to Western culture have been identified as contributors to variability in recognition rates for the basic emotions; we sought to examine this for the self-conscious emotions using the University of California, Davis Set of Emotion Expressions (UCDSEE). The present research examined recognition of the self-conscious emotion expressions in South Korean college students and found that recognition rates were very high for pride, low but above chance for shame, and near zero for embarrassment. To examine what might be underlying the recognition rates we found in South Korea, recognition of self-conscious emotions and several cultural values were examined in a U.S. college student sample of European Americans, Asian Americans, and Asian-born individuals. Emotion recognition rates were generally similar between the European Americans and Asian Americans, and higher than emotion recognition rates for Asian-born individuals. These differences were not explained by cultural values in an interpretable manner, suggesting that exposure to Western culture is a more important mediator than values.
Employing Textual and Facial Emotion Recognition to Design an Affective Tutoring System
ERIC Educational Resources Information Center
Lin, Hao-Chiang Koong; Wang, Cheng-Hung; Chao, Ching-Ju; Chien, Ming-Kuan
2012-01-01
Emotional expression in Artificial Intelligence has gained lots of attention in recent years, people applied its affective computing not only in enhancing and realizing the interaction between computers and human, it also makes computer more humane. In this study, emotional expressions were applied into intelligent tutoring system, where learners'…
Recognition of emotion with temporal lobe epilepsy and asymmetrical amygdala damage.
Fowler, Helen L; Baker, Gus A; Tipples, Jason; Hare, Dougal J; Keller, Simon; Chadwick, David W; Young, Andrew W
2006-08-01
Impairments in emotion recognition occur when there is bilateral damage to the amygdala. In this study, ability to recognize auditory and visual expressions of emotion was investigated in people with asymmetrical amygdala damage (AAD) and temporal lobe epilepsy (TLE). Recognition of five emotions was tested across three participant groups: those with right AAD and TLE, those with left AAD and TLE, and a comparison group. Four tasks were administered: recognition of emotion from facial expressions, sentences describing emotion-laden situations, nonverbal sounds, and prosody. Accuracy scores for each task and emotion were analysed, and no consistent overall effect of AAD on emotion recognition was found. However, some individual participants with AAD were significantly impaired at recognizing emotions, in both auditory and visual domains. The findings indicate that a minority of individuals with AAD have impairments in emotion recognition, but no evidence of specific impairments (e.g., visual or auditory) was found.
Recognition of facial emotions in neuropsychiatric disorders.
Kohler, Christian G; Turner, Travis H; Gur, Raquel E; Gur, Ruben C
2004-04-01
Recognition of facial emotions represents an important aspect of interpersonal communication and is governed by select neural substrates. We present data on emotion recognition in healthy young adults utilizing a novel set of color photographs of evoked universal emotions. In addition, we review the recent literature on emotion recognition in psychiatric and neurologic disorders, and studies that compare different disorders.
Model of Emotional Expressions in Movements
ERIC Educational Resources Information Center
Rozaliev, Vladimir L.; Orlova, Yulia A.
2013-01-01
This paper presents a new approach to automated identification of human emotions based on analysis of body movements, a recognition of gestures and poses. Methodology, models and automated system for emotion identification are considered. To characterize the person emotions in the model, body movements are described with linguistic variables and a…
Generalizations of the subject-independent feature set for music-induced emotion recognition.
Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping
2011-01-01
Electroencephalogram (EEG)-based emotion recognition has been an intensely growing field. Yet, how to achieve acceptable accuracy in a practical system with as few electrodes as possible has received less attention. This study evaluates a set of subject-independent features, based on differential power asymmetry of symmetric electrode pairs [1], with emphasis on its applicability to subject variability in the music-induced emotion classification problem. The results of this study validate the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy at second-scale temporal resolution. These features could be generalized across subjects to detect emotion induced by music excerpts not limited to the music database that was used to derive the emotion-specific features.
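A sketch of a differential power asymmetry feature for one symmetric electrode pair, computed from Welch power spectra. The band boundaries and sampling rate are assumptions, and a real system would loop over all symmetric pairs.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}   # assumed bands

def asymmetry_features(left, right, fs=256):
    """left/right: 1-D signals from a symmetric electrode pair (e.g. F3/F4)."""
    f, p_left = welch(left, fs=fs)          # signals should span >= 1 s
    _, p_right = welch(right, fs=fs)
    feats = []
    for lo, hi in BANDS.values():
        band = (f >= lo) & (f < hi)
        # log band-power difference between the two hemispheres
        feats.append(np.log(p_left[band].mean()) - np.log(p_right[band].mean()))
    return np.array(feats)
```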
[Impact of facial emotional recognition alterations in Dementia of the Alzheimer type].
Rubinstein, Wanda; Cossini, Florencia; Politis, Daniel
2016-07-01
Facial recognition of basic emotions is independent of other deficits in dementia of the Alzheimer type. Among these deficits, there is disagreement about which emotions are more difficult to recognize. Our aim was to study the presence of alterations in the process of facial recognition of basic emotions, and to investigate whether there were differences in the recognition of each type of emotion in Alzheimer's disease. With three tests of recognition of basic facial emotions, we evaluated 29 patients who had been diagnosed with dementia of the Alzheimer type and 18 control subjects. Significant differences between patients and controls were obtained on the tests of basic facial emotion recognition, as well as between the individual emotions. Since the amygdala, one of the brain structures responsible for emotional reaction, is affected in the early stages of this disease, our findings become relevant to understanding how this alteration of the emotional recognition process contributes to the difficulties these patients have in both interpersonal relations and behavioral disorders.
The effect of emotion on keystroke: an experimental study using facial feedback hypothesis.
Tsui, Wei-Hsuan; Lee, Poming; Hsiao, Tzu-Chien
2013-01-01
Automatic emotion recognition technology is an important part of building intelligent systems that prevent computers from acting inappropriately. A novel approach for recognizing emotional states from keystroke typing patterns on a standard keyboard has been developed in recent years. However, there has been very limited investigation of the phenomenon itself in the previous literature. Hence, in our study, we conducted a controlled experiment to collect subjects' keystroke data in different emotional states induced by facial feedback. We examined the difference in the keystroke data between positive and negative emotional states. The results show significant differences in the typing patterns under positive and negative emotions for all subjects. Our study provides evidence for the feasibility of emotion recognition based on keystroke dynamics.
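The study's exact feature set is not given here, but keystroke-dynamics work typically relies on timing features such as dwell time (key press to release) and flight time (release to next press); the following is a sketch under that assumption.

```python
def keystroke_features(events):
    """events: list of (key, press_time, release_time) tuples, in seconds."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {"mean_dwell": mean(dwell), "mean_flight": mean(flight)}

# Two keystrokes: 'h' held 90 ms, then a 160 ms gap before 'i' is pressed.
print(keystroke_features([("h", 0.00, 0.09), ("i", 0.25, 0.33)]))
```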
Mapping correspondence between facial mimicry and emotion recognition in healthy subjects.
Ponari, Marta; Conson, Massimiliano; D'Amico, Nunzia Pina; Grossi, Dario; Trojano, Luigi
2012-12-01
We aimed at verifying the hypothesis that facial mimicry is causally and selectively involved in emotion recognition. For this purpose, in Experiment 1, we explored the effect of tonic contraction of muscles in the upper or lower half of participants' faces on their ability to recognize emotional facial expressions. We found that the "lower" manipulation specifically impaired recognition of happiness and disgust, the "upper" manipulation impaired recognition of anger, while both manipulations affected recognition of fear; recognition of surprise and sadness was not affected by either blocking manipulation. In Experiment 2, we verified whether emotion recognition is hampered by stimuli in which an upper or lower half-face showing an emotional expression is combined with a neutral half-face. We found that the neutral lower half-face interfered with recognition of happiness and disgust, whereas the neutral upper half impaired recognition of anger; recognition of fear and sadness was impaired by both manipulations, whereas recognition of surprise was not affected by either manipulation. Taken together, the present findings support simulation models of emotion recognition and provide insight into the role of mimicry in the comprehension of others' emotional facial expressions.
Age-related differences in emotion recognition ability: a cross-sectional study.
Mill, Aire; Allik, Jüri; Realo, Anu; Valk, Raivo
2009-10-01
Experimental studies indicate that recognition of emotions, particularly negative emotions, decreases with age. However, there is no consensus on the age at which the decrease in emotion recognition begins, how selective it is to negative emotions, and whether it applies to both facial and vocal expression. In the current cross-sectional study, 607 participants ranging in age from 18 to 84 years (mean age = 32.6 +/- 14.9 years) were asked to recognize emotions expressed either facially or vocally. In general, older participants were found to be less accurate at recognizing emotions, with the most distinctive age difference pertaining to a specific group of negative emotions. Both modalities revealed an age-related decline in the recognition of sadness and, to a lesser degree, anger, starting at about 30 years of age. Although age-related differences in the recognition of emotion expressions were not mediated by personality traits, 2 of the Big 5 traits, openness and conscientiousness, made an independent contribution to emotion-recognition performance. Implications of age-related differences in facial and vocal emotion expression and the early onset of the selective decrease in emotion recognition are discussed in terms of previous findings and relevant theoretical models.
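Separating an age effect from independent personality contributions, as this study does, is commonly tested by regressing accuracy on age together with the relevant traits. The sketch below uses synthetic data and hypothetical variable names; it is one standard way to run such a test, not the study's actual model.

```python
# Sketch: does age predict recognition accuracy over and above two Big Five
# traits? All data below are synthetic stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 607
age = rng.uniform(18, 84, n)
openness = rng.normal(0, 1, n)
conscientiousness = rng.normal(0, 1, n)
# Synthetic accuracy: declines with age, rises with both traits
accuracy = (0.8 - 0.002 * age + 0.02 * openness
            + 0.01 * conscientiousness + rng.normal(0, 0.05, n))

X = sm.add_constant(np.column_stack([age, openness, conscientiousness]))
fit = sm.OLS(accuracy, X).fit()
print(fit.params)   # const, age, openness, conscientiousness coefficients
```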
An investigation of the effect of race-based social categorization on adults' recognition of emotion
Reyes, B. Nicole; Segal, Shira C.; Moulson, Margaret C.
2018-01-01
Emotion recognition is important for social interaction and communication, yet previous research has identified a cross-cultural emotion recognition deficit: Recognition is less accurate for emotions expressed by individuals from a cultural group different than one's own. The current study examined whether social categorization based on race, in the absence of cultural differences, influences emotion recognition in a diverse context. South Asian and White Canadians in the Greater Toronto Area completed an emotion recognition task that required them to identify the seven basic emotional expressions when posed by members of the same two groups, allowing us to tease apart the contributions of culture and social group membership. Contrary to our hypothesis, there was no mutual in-group advantage in emotion recognition: Participants were not more accurate at recognizing emotions posed by their respective racial in-groups. Both groups were more accurate at recognizing expressions when posed by South Asian faces, and White participants were more accurate overall compared to South Asian participants. These results suggest that in a diverse environment, categorization based on race alone does not lead to the creation of social out-groups in a way that negatively impacts emotion recognition. PMID:29474367
A multimodal approach to emotion recognition ability in autism spectrum disorders.
Jones, Catherine R G; Pickles, Andrew; Falcaro, Milena; Marsden, Anita J S; Happé, Francesca; Scott, Sophie K; Sauter, Disa; Tregay, Jenifer; Phillips, Rebecca J; Baird, Gillian; Simonoff, Emily; Charman, Tony
2011-03-01
Autism spectrum disorders (ASD) are characterised by social and communication difficulties in day-to-day life, including problems in recognising emotions. However, experimental investigations of emotion recognition ability in ASD have been equivocal, hampered by small sample sizes, narrow IQ range and over-focus on the visual modality. We tested 99 adolescents (mean age 15;6 years, mean IQ 85) with an ASD and 57 adolescents without an ASD (mean age 15;6 years, mean IQ 88) on a facial emotion recognition task and two vocal emotion recognition tasks (one verbal; one non-verbal). Recognition of happiness, sadness, fear, anger, surprise and disgust was tested. Using structural equation modelling, we conceptualised emotion recognition ability as a multimodal construct, measured by the three tasks. We examined how the mean levels of recognition of the six emotions differed by group (ASD vs. non-ASD) and IQ (≥ 80 vs. < 80). We found no evidence of a fundamental emotion recognition deficit in the ASD group, and analysis of error patterns suggested that the ASD group were vulnerable to the same pattern of confusions between emotions as the non-ASD group. However, recognition ability was significantly impaired in the ASD group for surprise. IQ had a strong and significant effect on performance for the recognition of all six emotions, with higher IQ adolescents outperforming lower IQ adolescents. The findings do not suggest a fundamental difficulty with the recognition of basic emotions in adolescents with ASD. © 2010 The Authors. Journal of Child Psychology and Psychiatry © 2010 Association for Child and Adolescent Mental Health.
Chen, Jing; Hu, Bin; Wang, Yue; Moore, Philip; Dai, Yongqiang; Feng, Lei; Ding, Zhijie
2017-12-20
Collaboration between humans and computers has become pervasive and ubiquitous; however, current computer systems are limited in that they fail to address the emotional component. An accurate understanding of human emotions is necessary for these computers to trigger proper feedback. Among multiple emotional channels, physiological signals are synchronous with emotional responses; therefore, analyzing physiological changes is a recognized way to estimate human emotions. In this paper, a three-stage decision method is proposed to recognize four emotions based on physiological signals in the multi-subject context. Emotion detection is achieved by using a stage-divided strategy in which each stage deals with a fine-grained goal. The decision method consists of three stages. During the training process, the initial stage transforms mixed training subjects into separate groups, thus eliminating the effect of individual differences. The second stage categorizes the four emotions into two emotion pools in order to reduce recognition complexity. The third stage trains a classifier based on the emotions in each emotion pool. During the testing process, a test trial is initially classified into a group, followed by classification into an emotion pool in the second stage. An emotion is assigned to the test trial in the final stage. In this paper we consider two different ways of allocating the four emotions into two emotion pools. A comparative analysis is also carried out between the proposed method and other methods. An average recognition accuracy of 77.57% was achieved on the recognition of four emotions, with the best accuracy of 86.67% for recognizing the positive and excited emotion. Using differing ways of allocating the four emotions into two emotion pools, we found a difference in the effectiveness of a classifier in learning each emotion. When compared to other methods, the proposed method demonstrates a significant improvement in recognizing four emotions in the multi-subject context. The proposed three-stage decision method addresses a crucial issue, namely individual differences in multi-subject emotion recognition, and overcomes the suboptimal performance associated with direct classification of multiple emotions. Our study supports the observation that the proposed method represents a promising methodology for recognizing multiple emotions in the multi-subject context.
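The stage-divided strategy described above can be pictured as a cascade of three learned components. The sketch below is schematic only: the grouping method (k-means), the classifiers (SVMs), the feature dimensionality and the synthetic data are assumptions standing in for the paper's actual choices.

```python
# Schematic three-stage cascade: subject group -> emotion pool -> emotion.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))        # stand-in physiological feature vectors
emotion = rng.integers(0, 4, 200)    # four emotion labels per trial
pool = emotion // 2                  # two pools of two emotions each

# Stage 1: partition mixed training trials into groups
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Stages 2-3: per group, one pool classifier plus one classifier per pool
models = {}
for g in range(2):
    idx = groups.labels_ == g
    pool_clf = SVC().fit(X[idx], pool[idx])
    per_pool = {p: SVC().fit(X[idx & (pool == p)], emotion[idx & (pool == p)])
                for p in (0, 1)}
    models[g] = (pool_clf, per_pool)

def predict(x):
    x = x.reshape(1, -1)
    g = groups.predict(x)[0]                 # stage 1: assign group
    pool_clf, per_pool = models[g]
    p = pool_clf.predict(x)[0]               # stage 2: assign emotion pool
    return per_pool[p].predict(x)[0]         # stage 3: assign emotion

print(predict(X[0]))
```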
Berzenski, Sara R; Yates, Tuppett M
2017-10-01
The ability to recognize and label emotions serves as a building block by which children make sense of the world and learn how to interact with social partners. However, the timing and salience of influences on emotion recognition development are not fully understood. Path analyses evaluated the contributions of parenting and child narrative coherence to the development of emotion recognition across ages 4 through 8 in a diverse (50% female; 46% Hispanic, 18.4% Black, 11.2% White, 0.4% Asian, 24.0% multiracial) longitudinally followed sample of 250 caregiver-child dyads. Parenting behaviors during interactions (i.e., support, instructional quality, intrusiveness, and hostility) and children's narrative coherence during the MacArthur Story Stem Battery were observed at ages 4 and 6. Emotion recognition increased from age 4 to 8. Parents' supportive presence at age 4 and instructional quality at age 6 predicted increased emotion recognition at 8, beyond initial levels of emotion recognition and child cognitive ability. There were no significant effects of negative parenting (i.e., intrusiveness or hostility) at 4 or 6 on emotion recognition. Child narrative coherence at ages 4 and 6 predicted increased emotion recognition at 8. Emotion recognition at age 4 predicted increased parent instructional quality and decreased intrusiveness at 6. These findings clarify whether and when familial and child factors influence emotion recognition development. Influences on emotion recognition development emerged as differentially salient across time periods, such that there is a need to develop and implement targeted interventions to promote positive parenting skills and children's narrative coherence at specific ages. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Human and animal sounds influence recognition of body language.
Van den Stock, Jan; Grèzes, Julie; de Gelder, Beatrice
2008-11-25
In naturalistic settings emotional events have multiple correlates and are simultaneously perceived by several sensory systems. Recent studies have shown that recognition of facial expressions is biased towards the emotion expressed by a simultaneously presented emotional expression in the voice, even if attention is directed to the face only. So far, no study has examined whether this phenomenon also applies to whole body expressions, although there is no obvious reason why this crossmodal influence would be specific to faces. Here we investigated whether perception of emotions expressed in whole body movements is influenced by affective information provided by human and by animal vocalizations. Participants were instructed to attend to the action displayed by the body and to categorize the expressed emotion. The results indicate that recognition of body language is biased towards the emotion expressed by the simultaneously presented auditory information, whether it consists of human or animal sounds. Our results show that a crossmodal influence from auditory to visual emotional information obtains for whole body video images with the facial expression blanked, and that it includes human as well as animal sounds.
Stereotypes and prejudice affect the recognition of emotional body postures.
Bijlstra, Gijsbert; Holland, Rob W; Dotsch, Ron; Wigboldus, Daniel H J
2018-03-26
Most research on emotion recognition focuses on facial expressions. However, people communicate emotional information through bodily cues as well. Prior research on facial expressions has demonstrated that emotion recognition is modulated by top-down processes. Here, we tested whether this top-down modulation generalizes to the recognition of emotions from body postures. We report three studies demonstrating that stereotypes and prejudice about men and women may affect how fast people classify various emotional body postures. Our results suggest that gender cues activate gender associations, which affect the recognition of emotions from body postures in a top-down fashion. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Farsham, Aida; Abbaslou, Tahereh; Bidaki, Reza; Bozorg, Bonnie
2017-01-01
Objective: No research has been conducted on facial emotion recognition in patients with borderline personality disorder (BPD) and schizotypal personality disorder (SPD). The present study aimed at comparing facial emotion recognition in these patients with the general population. The neurocognitive processing of emotions can reveal the pathologic style of these 2 disorders. Method: Twenty BPD patients, 16 SPD patients, and 20 healthy individuals were selected by the available sampling method. The Structured Clinical Interview for Axis II, Millon Personality Inventory, Beck Depression Inventory, and Facial Emotional Recognition Test were conducted for all participants. Results: One-way ANOVA and Scheffe's post hoc test revealed significant differences in the neuropsychological assessment of facial emotion recognition between BPD and SPD patients and the normal group (p = 0.001). A significant difference was found in recognition of fear between the BPD group and the normal population (p = 0.008). A significant difference was observed between SPD patients and the control group in recognition of wonder (p = 0.04). The obtained results indicated a deficit in negative emotion recognition, especially the disgust emotion; thus, it can be concluded that these patients have the same neurocognitive profile in the emotion domain. PMID:28659980
Emotion-independent face recognition
NASA Astrophysics Data System (ADS)
De Silva, Liyanage C.; Esther, Kho G. P.
2000-12-01
Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, a back-propagation neural network and a generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image of the peak expression for each emotion of each person apart from the neutral expression. In the Euclidean distance method, and when training the neural networks, all feature vectors of the training set are used for comparison. These results are obtained for a face database consisting of only four persons.
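The eigenface-plus-Euclidean-distance pipeline the paper describes condenses to a few linear-algebra steps: center the training faces, extract principal components via SVD, project, and match by nearest neighbour. The sketch below mirrors that structure with random stand-in images and an assumed component count, not the paper's exact configuration.

```python
# Minimal eigenface sketch: PCA features + Euclidean nearest-neighbour match.
import numpy as np

rng = np.random.default_rng(3)
train = rng.random((24, 32 * 32))       # e.g. 4 persons x 6 expressions, flattened
labels = np.repeat(np.arange(4), 6)     # person identity per training image

mean_face = train.mean(axis=0)
A = train - mean_face
# Eigenfaces are the right singular vectors of the centred training matrix
_, _, Vt = np.linalg.svd(A, full_matrices=False)
eigenfaces = Vt[:10]                    # keep 10 components (assumption)
train_w = A @ eigenfaces.T              # project training faces

def identify(face):
    w = (face - mean_face) @ eigenfaces.T
    d = np.linalg.norm(train_w - w, axis=1)   # Euclidean distances
    return labels[np.argmin(d)]

print(identify(train[7]))   # recovers person 1 (images 6-11 belong to them)
```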
Fenske, Sabrina; Lis, Stefanie; Liebke, Lisa; Niedtfeld, Inga; Kirsch, Peter; Mier, Daniela
2015-01-01
Borderline Personality Disorder (BPD) is characterized by severe deficits in social interactions, which might be linked to deficits in emotion recognition. Research on emotion recognition abilities in BPD has revealed heterogeneous results, ranging from deficits to heightened sensitivity. The most stable findings point to an impairment in the evaluation of neutral facial expressions as neutral, as well as to a negative bias in emotion recognition; that is, the tendency to attribute negative emotions to neutral expressions, or in a broader sense to report a more negative emotion category than depicted. However, it remains unclear which contextual factors influence the occurrence of this negative bias. Previous studies suggest that priming by preceding emotional information and also constrained processing time might augment the emotion recognition deficit in BPD. To test these assumptions, 32 female BPD patients and 31 healthy females, matched for age and education, participated in an emotion recognition study in which every facial expression was preceded by either a positive, neutral or negative scene. Furthermore, time constraints for processing were varied by presenting the facial expressions with short (100 ms) or long duration (up to 3000 ms) in two separate blocks. BPD patients showed a significant deficit in emotion recognition for neutral and positive facial expressions, associated with a significant negative bias. In BPD patients, this emotion recognition deficit was differentially affected by preceding emotional information and time constraints, with a greater influence of emotional information during long face presentations and a greater influence of neutral information during short face presentations. Our results are in line with previous findings supporting the existence of a negative bias in emotion recognition in BPD patients, and provide further insights into biased social perceptions in BPD patients.
Goghari, Vina M; Macdonald, Angus W; Sponheim, Scott R
2011-11-01
Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.
Wolf, Richard C; Pujara, Maia; Baskaya, Mustafa K; Koenigs, Michael
2016-09-01
Facial emotion recognition is a critical aspect of human communication. Since abnormalities in facial emotion recognition are associated with social and affective impairment in a variety of psychiatric and neurological conditions, identifying the neural substrates and psychological processes underlying facial emotion recognition will help advance basic and translational research on social-affective function. Ventromedial prefrontal cortex (vmPFC) has recently been implicated in deploying visual attention to the eyes of emotional faces, although there is mixed evidence regarding the importance of this brain region for recognition accuracy. In the present study of neurological patients with vmPFC damage, we used an emotion recognition task with morphed facial expressions of varying intensities to determine (1) whether vmPFC is essential for emotion recognition accuracy, and (2) whether instructed attention to the eyes of faces would be sufficient to improve any accuracy deficits. We found that vmPFC lesion patients are impaired, relative to neurologically healthy adults, at recognizing moderate intensity expressions of anger and that recognition accuracy can be improved by providing instructions of where to fixate. These results suggest that vmPFC may be important for the recognition of facial emotion through a role in guiding visual attention to emotionally salient regions of faces. Copyright © 2016 Elsevier Ltd. All rights reserved.
Leist, Tatyana; Dadds, Mark R
2009-04-01
Emotional processing styles appear to characterize various forms of psychopathology and environmental adversity in children. For example, autistic, anxious, high- and low-emotion conduct problem children, and children who have been maltreated, all appear to show specific deficits and strengths in recognizing the facial expressions of emotions. Until now, the relationships between emotion recognition, antisocial behaviour, emotional problems, callous-unemotional (CU) traits and early maltreatment have never been assessed simultaneously in one study, and the specific associations of emotion recognition to maltreatment and child characteristics are therefore unknown. We examined facial-emotion processing in a sample of 23 adolescents selected for high-risk status on the variables of interest. As expected, maltreatment and child characteristics showed unique associations. CU traits were uniquely related to impairments in fear recognition. Antisocial behaviour was uniquely associated with better fear recognition, but impaired anger recognition. Emotional problems were associated with better recognition of anger and sadness, but lower recognition of neutral faces. Maltreatment was predictive of superior recognition of fear and sadness. The findings are considered in terms of social information-processing theories of psychopathology. Implications for clinical interventions are discussed.
Luebbe, Aaron M; Fussner, Lauren M; Kiel, Elizabeth J; Early, Martha C; Bell, Debora J
2013-12-01
Depressive symptomatology is associated with impaired recognition of emotion. Previous investigations have predominantly focused on emotion recognition of static facial expressions neglecting the influence of social interaction and critical contextual factors. In the current study, we investigated how youth and maternal symptoms of depression may be associated with emotion recognition biases during familial interactions across distinct contextual settings. Further, we explored if an individual's current emotional state may account for youth and maternal emotion recognition biases. Mother-adolescent dyads (N = 128) completed measures of depressive symptomatology and participated in three family interactions, each designed to elicit distinct emotions. Mothers and youth completed state affect ratings pertaining to self and other at the conclusion of each interaction task. Using multiple regression, depressive symptoms in both mothers and adolescents were associated with biased recognition of both positive affect (i.e., happy, excited) and negative affect (i.e., sadness, anger, frustration); however, this bias emerged primarily in contexts with a less strong emotional signal. Using actor-partner interdependence models, results suggested that youth's own state affect accounted for depression-related biases in their recognition of maternal affect. State affect did not function similarly in explaining depression-related biases for maternal recognition of adolescent emotion. Together these findings suggest a similar negative bias in emotion recognition associated with depressive symptoms in both adolescents and mothers in real-life situations, albeit potentially driven by different mechanisms.
Improving Negative Emotion Recognition in Young Offenders Reduces Subsequent Crime
Hubble, Kelly; Bowen, Katharine L.; Moore, Simon C.; van Goozen, Stephanie H. M.
2015-01-01
Background Children with antisocial behaviour show deficits in the perception of emotional expressions in others that may contribute to the development and persistence of antisocial and aggressive behaviour. Current treatments for antisocial youngsters are limited in effectiveness. It has been argued that more attention should be devoted to interventions that target neuropsychological correlates of antisocial behaviour. This study examined the effect of emotion recognition training on criminal behaviour. Methods Emotion recognition and crime levels were studied in 50 juvenile offenders. Whilst all young offenders received their statutory interventions as the study was conducted, a subgroup of twenty-four offenders also took part in a facial affect training aimed at improving emotion recognition. Offenders in the training and control groups were matched for age, SES, IQ and lifetime crime level. All offenders were tested twice for emotion recognition performance, and recent crime data were collected after the testing had been completed. Results Before the training there were no differences between the groups in emotion recognition, with both groups displaying poor fear, sadness and anger recognition. After the training fear, sadness and anger recognition improved significantly in juvenile offenders in the training group. Although crime rates dropped in all offenders in the 6 months following emotion testing, only the group of offenders who had received the emotion training showed a significant reduction in the severity of the crimes they committed. Conclusions The study indicates that emotion recognition can be relatively easily improved in youths who engage in serious antisocial and criminal behavior. The results suggest that improved emotion recognition has the potential to reduce the severity of reoffending. PMID:26121148
Impaired recognition of happy facial expressions in bipolar disorder.
Lawlor-Savage, Linette; Sponheim, Scott R; Goghari, Vina M
2014-08-01
The ability to accurately judge facial expressions is important in social interactions. Individuals with bipolar disorder have been found to be impaired in emotion recognition; however, the specifics of the impairment are unclear. This study investigated whether facial emotion recognition difficulties in bipolar disorder reflect general cognitive, or emotion-specific, impairments. Impairment in the recognition of particular emotions and the role of processing speed in facial emotion recognition were also investigated. Clinically stable bipolar patients (n = 17) and healthy controls (n = 50) judged five facial expressions in two presentation types, time-limited and self-paced. An age recognition condition was used as an experimental control. Bipolar patients' overall facial recognition ability was unimpaired. However, patients' specific ability to judge happy expressions under time constraints was impaired. Findings suggest a deficit in happy emotion recognition impacted by processing speed. Given the limited sample size, further investigation with a larger patient sample is warranted.
Emotion recognition pattern in adolescent boys with attention-deficit/hyperactivity disorder.
Aspan, Nikoletta; Bozsik, Csilla; Gadoros, Julia; Nagy, Peter; Inantsy-Pap, Judit; Vida, Peter; Halasz, Jozsef
2014-01-01
Social and emotional deficits have recently been considered inherent features of individuals with attention-deficit hyperactivity disorder (ADHD), but only sporadic literature data exist on emotion recognition in adolescents with ADHD. The aim of the present study was to establish the emotion recognition profile of adolescent boys with ADHD in comparison with control adolescents. Forty-four adolescent boys (13-16 years) participated in the study after informed consent; 22 boys had a clinical diagnosis of ADHD, while data were also assessed from 22 control boys matched for age and Raven IQ. Parent- and self-reported behavioral characteristics were assessed by means of the Strengths and Difficulties Questionnaire. The recognition of six basic emotions was evaluated by the "Facial Expressions of Emotion-Stimuli and Tests." Compared to controls, adolescents with ADHD were more sensitive in the recognition of disgust, worse in the recognition of fear, and showed a tendency for impaired recognition of sadness. Hyperactivity measures showed an inverse correlation with fear recognition. Our data suggest that adolescent boys with ADHD have alterations in the recognition of specific emotions.
Thonse, Umesh; Behere, Rishikesh V; Praharaj, Samir Kumar; Sharma, Podila Sathya Venkata Narasimha
2018-06-01
Facial emotion recognition deficits have been consistently demonstrated in patients with severe mental disorders. Expressed emotion is found to be an important predictor of relapse. However, the relationship between facial emotion recognition abilities and expressed emotions, and its influence on socio-occupational functioning, in schizophrenia versus bipolar disorder has not been studied. In this study we examined 91 patients with schizophrenia and 71 with bipolar disorder for psychopathology, socio-occupational functioning and emotion recognition abilities. Primary caregivers of 62 patients with schizophrenia and 49 with bipolar disorder were assessed on the Family Attitude Questionnaire to measure their expressed emotions. Patients with schizophrenia and bipolar disorder performed similarly on the emotion recognition task. Patients in the schizophrenia group experienced more critical comments and had poorer socio-occupational functioning compared to patients with bipolar disorder. Poorer socio-occupational functioning in patients with schizophrenia was significantly associated with greater dissatisfaction in their caregivers. In patients with bipolar disorder, poorer emotion recognition scores correlated significantly with poorer adaptive living skills and greater hostility and dissatisfaction in their caregivers. The findings of our study suggest that emotion recognition abilities in patients with bipolar disorder are associated with negative expressed emotions, leading to problems in adaptive living skills. Copyright © 2018 Elsevier B.V. All rights reserved.
Social approach and emotion recognition in fragile X syndrome.
Williams, Tracey A; Porter, Melanie A; Langdon, Robyn
2014-03-01
Evidence is emerging that individuals with Fragile X syndrome (FXS) display emotion recognition deficits, which may contribute to their significant social difficulties. The current study investigated the emotion recognition abilities, and social approachability judgments, of FXS individuals when processing emotional stimuli. Relative to chronological age- (CA-) and mental age- (MA-) matched controls, the FXS group performed significantly more poorly on the emotion recognition tasks, and displayed a bias towards detecting negative emotions. Moreover, after controlling for emotion recognition deficits, the FXS group displayed significantly reduced ratings of social approachability. These findings suggest that a social anxiety pattern, rather than poor socioemotional processing, may best explain the social avoidance observed in FXS.
EMOTION RECOGNITION OF VIRTUAL AGENTS FACIAL EXPRESSIONS: THE EFFECTS OF AGE AND EMOTION INTENSITY
Beer, Jenay M.; Fisk, Arthur D.; Rogers, Wendy A.
2014-01-01
People make determinations about the social characteristics of an agent (e.g., a robot or virtual agent) by interpreting social cues displayed by the agent, such as facial expressions. Although a considerable amount of research has investigated age-related differences in emotion recognition of human faces (e.g., Sullivan & Ruffman, 2004), the effect of age on emotion identification of virtual agent facial expressions has been largely unexplored. Age-related differences in emotion recognition of facial expressions are an important factor to consider in the design of agents that may assist older adults in a recreational or healthcare setting. The purpose of the current research was to investigate whether age-related differences in facial emotion recognition extend to emotion-expressive virtual agents. Younger and older adults performed a recognition task with a virtual agent expressing six basic emotions. Larger age-related differences were expected for virtual agents displaying negative emotions, such as anger, sadness, and fear. In fact, the results indicated that older adults showed a decrease in emotion recognition accuracy for a virtual agent's emotions of anger, fear, and happiness. PMID:25552896
Comparison of emotion recognition from facial expression and music.
Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija
2011-01-01
The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro expressions) and the recognition itself does not have to be a conscious process. We assumed that recognition of emotions from facial expressions has been evolutionarily favored over recognition of emotions communicated through music. In order to compare the success rates in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey which included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music works with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized when presented on human faces than in music, possibly because the understanding of facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for because of the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive abilities such as attention, memory and motivation. Music pieces are probably processed differently in the brain than facial expressions and, consequently, evaluated differently as relevant emotional clues.
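Whether a face-versus-music difference in success rates is statistically reliable is naturally tested with a chi-square test on correct/incorrect counts. The counts below are invented for illustration; this is not the survey's reported analysis.

```python
# Sketch: compare recognition success rates for faces vs. music.
# Counts are hypothetical stand-ins, not the study's data.
from scipy.stats import chi2_contingency

#        correct  incorrect
table = [[130, 47],    # facial expressions
         [85, 92]]     # classical music excerpts
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```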
Scotland, Jennifer L; McKenzie, Karen; Cossar, Jill; Murray, Aja; Michie, Amanda
2016-01-01
This study aimed to evaluate the emotion recognition abilities of adults (n=23) with an intellectual disability (ID) compared with a control group of children (n=23) without ID matched for estimated cognitive ability. The study examined the impact of: task paradigm, stimulus type and preferred processing style (global/local) on accuracy. We found that, after controlling for estimated cognitive ability, the control group performed significantly better than the individuals with ID. This provides some support for the emotion specificity hypothesis. Having a more local processing style did not significantly mediate the relation between having ID and emotion recognition, but did significantly predict emotion recognition ability after controlling for group. This suggests that processing style is related to emotion recognition independently of having ID. The availability of contextual information improved emotion recognition for people with ID when compared with line drawing stimuli, and identifying a target emotion from a choice of two was relatively easier for individuals with ID, compared with the other task paradigms. The results of the study are considered in the context of current theories of emotion recognition deficits in individuals with ID. Copyright © 2015 Elsevier Ltd. All rights reserved.
Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?
Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James
2017-01-01
Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-alternative forced-choice response format (emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in the perceived accuracy of the response. For emotion recognition, PD patients scored lower than HC on the Ekman total score (p<0.001) and on the single-emotion sub-scores for happiness, fear, anger, sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC differed significantly in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger (all p<0.001). RT and the level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between emotion facial recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). PD patients showed difficulties in recognizing emotional facial expressions produced by others and in posing facial emotional expressions compared to healthy subjects. The linear correlation between recognition and expression in both experimental groups suggests that the two mechanisms share a common system, which could be deteriorated in patients with PD. These results open new clinical and rehabilitation perspectives.
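The ranking correlation mentioned at the end (R = 0.75 across emotions ordered from best to worst recognized) is a Spearman rank correlation between per-emotion recognition and expressivity scores. A minimal sketch with hypothetical per-emotion scores:

```python
# Sketch: rank correlation between per-emotion recognition and expressivity.
# Scores are hypothetical, not the study's data.
from scipy.stats import spearmanr

emotions = ["happiness", "surprise", "sadness", "fear", "disgust", "anger"]
recognition = [0.92, 0.81, 0.66, 0.48, 0.55, 0.60]    # mean accuracy per emotion
expressivity = [0.88, 0.74, 0.62, 0.41, 0.58, 0.57]   # rater accuracy per emotion
rho, p = spearmanr(recognition, expressivity)
print(f"rho={rho:.2f}, p={p:.3f}")
```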
Younger and Older Users’ Recognition of Virtual Agent Facial Expressions
Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.
2015-01-01
As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent's social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in depth younger and older adults' ability to label a virtual agent's facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a possible explanation for age-related differences in emotion recognition. First, our findings show age-related differences in the recognition of emotions expressed by a virtual agent, with older adults showing lower recognition for the emotions of anger, disgust, fear, happiness, sadness, and neutral. These age-related differences might be explained by older adults having difficulty discriminating similarity in configural arrangement of facial features for certain emotions; for example, older adults often mislabeled the similar emotions of fear as surprise. Second, our results did not provide evidence for the dynamic formation improving emotion recognition; but, in general, the intensity of the emotion improved recognition. Lastly, we learned that emotion recognition, for older and younger adults, differed by character type, from best to worst: human, synthetic human, and then iCat. Our findings provide guidance for design, as well as the development of a framework of age-related differences in emotion recognition. PMID:25705105
Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris
2018-01-01
According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240
Can emotion recognition be taught to children with autism spectrum conditions?
Baron-Cohen, Simon; Golan, Ofer; Ashwin, Emma
2009-01-01
Children with autism spectrum conditions (ASC) have major difficulties in recognizing and responding to emotional and mental states in others' facial expressions. Such difficulties in empathy underlie their social-communication difficulties that form a core of the diagnosis. In this paper we ask whether aspects of empathy can be taught to young children with ASC. We review a study that evaluated The Transporters, an animated series designed to enhance emotion comprehension in children with ASC. Children with ASC (4–7 years old) watched The Transporters every day for four weeks. Participants were tested before and after intervention on emotional vocabulary and emotion recognition at three levels of generalization. The intervention group improved significantly more than a clinical control group on all task levels, performing comparably to typical controls at time 2. The discussion centres on how vehicles as mechanical systems may be one key reason why The Transporters caused the improved understanding and recognition of emotions in children with ASC. The implications for the design of autism-friendly interventions are also explored. PMID:19884151
Emotion recognition and social adjustment in school-aged girls and boys.
Leppänen, J M; Hietanen, J K
2001-12-01
The present study investigated emotion recognition accuracy and its relation to social adjustment in 7-10 year-old children. The ability to recognize basic emotions from facial and vocal expressions was measured and compared to peer popularity and to teacher-rated social competence. The results showed that emotion recognition was related to these measures of social adjustment, but the gender of a child and emotion category affected this relationship. Emotion recognition accuracy was significantly related to social adjustment for the girls, but not for the boys. For the girls, especially the recognition of surprise was related to social adjustment. Together, these results suggest that the ability to recognize others' emotional states from nonverbal cues is an important socio-cognitive ability for school-aged girls.
Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long
2013-10-30
This study assessed facial emotion recognition abilities in subjects with paranoid and non-paranoid schizophrenia using signal detection theory. We explored the differential deficits in facial emotion recognition in 44 paranoid patients with schizophrenia (PS) and 30 non-paranoid patients with schizophrenia (NPS), compared to 80 healthy controls. We used morphed faces with different intensities of emotion and computed the sensitivity index (d') for each emotion. The results showed that performance differed between the schizophrenia and healthy control groups in the recognition of both negative and positive affects. The PS group performed worse than the healthy control group but better than the NPS group in overall performance. Performance differed between the NPS and healthy control groups in the recognition of all basic emotions and neutral faces; between the PS and healthy control groups in the recognition of angry faces; and between the PS and NPS groups in the recognition of happiness, anger, sadness, disgust, and neutral affects. The facial emotion recognition impairment in schizophrenia may reflect a generalized deficit rather than a negative-emotion-specific deficit. The PS group performed worse than the control group, but better than the NPS group, in facial expression recognition, with differential deficits between PS and NPS patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
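The sensitivity index d' used in this study is standard signal detection theory: the difference between the z-transformed hit rate and false-alarm rate. A minimal computation follows; the correction for extreme rates and the example counts are illustrative choices, not taken from the paper.

```python
# d' = Z(hit rate) - Z(false-alarm rate), with a log-linear correction
# guarding against rates of exactly 0 or 1. Counts below are hypothetical.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hr) - norm.ppf(far)

print(d_prime(hits=38, misses=6, false_alarms=9, correct_rejections=35))
```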
Drapeau, Joanie; Gosselin, Nathalie; Peretz, Isabelle; McKerral, Michelle
2017-01-01
The objectives were to assess emotion recognition from dynamic facial, vocal and musical expressions in sub-groups of adults with traumatic brain injuries (TBI) of different severities, and to identify possible common underlying mechanisms across domains. Forty-one adults participated in this study: 10 with moderate-severe TBI, 9 with complicated mild TBI, 11 with uncomplicated mild TBI and 11 healthy controls, who were administered experimental (emotion recognition, valence-arousal) and control tasks (emotional and structural discrimination) for each domain. Recognition of fearful faces was significantly impaired in the moderate-severe and complicated mild TBI sub-groups, as compared to those with uncomplicated mild TBI and controls. Effect sizes were medium-large. Participants with lower GCS scores performed more poorly when recognizing fearful dynamic facial expressions. Emotion recognition from the auditory domains was preserved following TBI, irrespective of severity. All groups performed equally on control tasks, indicating no perceptual disorders. Although emotion recognition from vocal and musical expressions was preserved, no correlation was found across auditory domains. This preliminary study may contribute to improving comprehension of emotion recognition following TBI. Future studies with larger samples could usefully include measures of the functional impacts of recognition deficits for fearful facial expressions. These could help refine interventions for emotion recognition following a brain injury.
Van Rheenen, Tamsyn E; Joshua, Nicole; Castle, David J; Rossell, Susan L
2017-03-01
Emotion recognition impairments have been demonstrated in schizophrenia (Sz), but are less consistent and lesser in magnitude in bipolar disorder (BD). This may be related to the extent to which different face processing strategies are engaged during emotion recognition in each of these disorders. We recently showed that Sz patients had impairments in the use of both featural and configural face processing strategies, whereas BD patients were impaired only in the use of the latter. Here we examine the influence that these impairments have on facial emotion recognition in these cohorts. Twenty-eight individuals with Sz, 28 individuals with BD, and 28 healthy controls completed a facial emotion labeling task with two conditions designed to separate the use of featural and configural face processing strategies; part-based and whole-face emotion recognition. Sz patients performed worse than controls on both conditions, and worse than BD patients on the whole-face condition. BD patients performed worse than controls on the whole-face condition only. Configural processing deficits appear to influence the recognition of facial emotions in BD, whereas both configural and featural processing abnormalities impair emotion recognition in Sz. This may explain discrepancies in the profiles of emotion recognition between the disorders. (JINS, 2017, 23, 287-291).
Halász, József; Áspán, Nikoletta; Bozsik, Csilla; Gádoros, Júlia; Inántsy-Pap, Judit
2013-01-01
In adult individuals with antisocial personality disorder, impairment in the recognition of fear seems established. In adolescents with conduct disorder (an antecedent of antisocial personality disorder), only sporadic data have been collected, but the literature indicates alterations in the recognition of emotions. The aim of the present study was to assess the relationship between emotion recognition and conduct symptoms in non-clinical adolescents. Fifty-three adolescents participated in the study (13-16 years; boys, n=29, age 14.7±0.2 years; girls, n=24, age 14.7±0.2 years) after informed consent. The parent version of the Strengths and Difficulties Questionnaire was used to assess behavioral problems. The recognition of six basic emotions was established by the "Facial expressions of emotion-stimuli and tests", and Raven IQ measures were also obtained. Compared to boys, girls showed significantly better performance in the recognition of disgust (p<0.035), while no significant difference occurred in the recognition of other emotions. In boys, the Conduct Problems score was inversely correlated with the recognition of fear (Spearman R=-0.40, p<0.031) and with overall emotion recognition (Spearman R=-0.44, p<0.015), while no similar correlation was present in girls. The relationship between the recognition of emotions and conduct problems might indicate an important mechanism in the development of antisocial behavior.
Survey of Technologies for the Airport Border of the Future
2014-04-01
[Keyword-index residue from the survey; the recoverable content is its catalogue of border-security recognition technologies: hand geometry, handwriting recognition, ID cards, image classification, enhancement, fusion, matching, processing and segmentation, iris recognition, tongue print, footstep recognition, odour recognition, retinal recognition, emotion recognition, periocular recognition, ear recognition, palmprint recognition, DNA matching and vein matching.]
Neves, Maila de Castro Lourenço das; Tremeau, Fabien; Nicolato, Rodrigo; Lauar, Hélio; Romano-Silva, Marco Aurélio; Correa, Humberto
2011-09-01
A large body of evidence suggests that several aspects of face processing are impaired in autism and that this impairment might be hereditary. This study aimed to assess facial emotion recognition in parents of children with autism and its associations with a functional polymorphism of the serotonin transporter (5HTTLPR). We evaluated 40 parents of children with autism and 41 healthy controls. All participants were administered the Penn Emotion Recognition Test (ER40) and were genotyped for 5HTTLPR. Our study showed that parents of children with autism performed worse on the facial emotion recognition test than controls. Analyses of error patterns showed that parents of children with autism over-attributed neutral to emotional faces. We found evidence that the 5HTTLPR polymorphism did not influence performance on the Penn Emotion Recognition Test, but that it may determine different error patterns. Facial emotion recognition deficits are more common in first-degree relatives of autistic patients than in the general population, suggesting that facial emotion recognition is a candidate endophenotype for autism.
Bora, Emre; Velakoulis, Dennis; Walterfang, Mark
2016-07-01
Behavioral disturbances and lack of empathy are distinctive clinical features of behavioral variant frontotemporal dementia (bvFTD) in comparison to Alzheimer disease (AD). The aim of this meta-analytic review was to compare the facial emotion recognition performance of bvFTD with healthy controls and AD. The meta-analysis included a total of 19 studies, involving comparisons of 288 individuals with bvFTD versus 329 healthy controls and 162 individuals with bvFTD versus 147 patients with AD. Facial emotion recognition was significantly impaired in bvFTD in comparison to healthy controls (d = 1.81) and AD (d = 1.23). In bvFTD, recognition of negative emotions, especially anger (d = 1.48) and disgust (d = 1.41), was severely impaired. Emotion recognition was significantly impaired in bvFTD in comparison to AD for all emotions other than happiness. Impairment of emotion recognition is a relatively specific feature of bvFTD. Routine assessment of social-cognitive abilities, including emotion recognition, can be helpful in better differentiating between cortical dementias such as bvFTD and AD. © The Author(s) 2016.
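The d values quoted above are standardized mean differences (Cohen's d). As a reference point, a minimal sketch of the usual pooled-standard-deviation computation follows; the numbers are invented for illustration.

```python
# Minimal sketch of Cohen's d, the effect size used in the meta-analysis
# above: the difference between group means divided by the pooled SD.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Pooled-SD Cohen's d for two independent groups."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Invented example: controls vs. bvFTD on an emotion-recognition score
print(round(cohens_d(42.0, 6.0, 329, 30.5, 6.5, 288), 2))
```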
Percinel, Ipek; Ozbaran, Burcu; Kose, Sezen; Simsek, Damla Goksen; Darcan, Sukran
2018-03-01
In this study we aimed to evaluate the emotion recognition and emotion regulation skills of children with exogenous obesity between the ages of 11 and 18 years and compare them with healthy controls. The Schedule for Affective Disorders and Schizophrenia for School Aged Children was used for psychiatric evaluations. Emotion recognition skills were evaluated using the Faces Test and the Reading the Mind in the Eyes Test. The Difficulties in Emotion Regulation Scale was used for evaluating emotion regulation skills. Children with obesity had lower scores on the Faces Test and the Reading the Mind in the Eyes Test, and experienced greater difficulty with emotion regulation. Improved understanding of emotion recognition and emotion regulation in young people with obesity may improve their social adaptation and help in the treatment of their disorder. To the best of our knowledge, this is the first study to evaluate both emotion recognition and emotion regulation functions in obese children and adolescents between 11 and 18 years of age.
Food-Induced Emotional Resonance Improves Emotion Recognition.
Pandolfi, Elisa; Sacripante, Riccardo; Cardini, Flavia
2016-01-01
The effect of food substances on emotional states has been widely investigated, showing, for example, that eating chocolate is able to reduce negative mood. Here, for the first time, we have shown that the consumption of specific food substances is not only able to induce particular emotional states, but more importantly, to facilitate recognition of corresponding emotional facial expressions in others. Participants were asked to perform an emotion recognition task before and after eating either a piece of chocolate or a small amount of fish sauce, which we expected to induce happiness or disgust, respectively. Our results showed that being in a specific emotional state improves recognition of the corresponding emotional facial expression. Indeed, eating chocolate improved recognition of happy faces, while disgusted expressions were more readily recognized after eating fish sauce. In line with the embodied account of emotion understanding, we suggest that people are better at inferring the emotional state of others when their own emotional state resonates with the observed one.
Ruocco, Anthony C.; Reilly, James L.; Rubin, Leah H.; Daros, Alex R.; Gershon, Elliot S.; Tamminga, Carol A.; Pearlson, Godfrey D.; Hill, S. Kristian; Keshavan, Matcheri S.; Gur, Ruben C.; Sweeney, John A.
2014-01-01
Background: Difficulty recognizing facial emotions is an important social-cognitive deficit associated with psychotic disorders. It also may reflect a familial risk for psychosis in schizophrenia-spectrum disorders and bipolar disorder. Objective: The objectives of this study from the Bipolar-Schizophrenia Network on Intermediate Phenotypes (B-SNIP) consortium were to: 1) compare emotion recognition deficits in schizophrenia, schizoaffective disorder and bipolar disorder with psychosis, 2) determine the familiality of emotion recognition deficits across these disorders, and 3) evaluate emotion recognition deficits in nonpsychotic relatives with and without elevated Cluster A and Cluster B personality disorder traits. Method: Participants included probands with schizophrenia (n=297), schizoaffective disorder (depressed type, n=61; bipolar type, n=69), bipolar disorder with psychosis (n=248), their first-degree relatives (n=332, n=69, n=154, and n=286, respectively) and healthy controls (n=380). All participants completed the Penn Emotion Recognition Test, a standardized measure of facial emotion recognition assessing four basic emotions (happiness, sadness, anger and fear) and neutral expressions (no emotion). Results: Compared to controls, emotion recognition deficits among probands increased progressively from bipolar disorder to schizoaffective disorder to schizophrenia. Proband and relative groups showed similar deficits perceiving angry and neutral faces, whereas deficits on fearful, happy and sad faces were primarily isolated to schizophrenia probands. Even non-psychotic relatives without elevated Cluster A or Cluster B personality disorder traits showed deficits on neutral and angry faces. Emotion recognition ability was moderately familial only in schizophrenia families. Conclusions: Emotion recognition deficits are prominent but somewhat different across psychotic disorders. These deficits are reflected to a lesser extent in relatives, particularly on angry and neutral faces. Deficits were evident in non-psychotic relatives even without elevated personality disorder traits. Deficits in facial emotion recognition may reflect an important social-cognitive deficit in patients with psychotic disorders. PMID:25052782
Rohr, Michaela; Tröger, Johannes; Michely, Nils; Uhde, Alarith; Wentura, Dirk
2017-07-01
This article deals with two well-documented phenomena regarding emotional stimuli: emotional memory enhancement (better long-term memory for emotional than for neutral stimuli) and the emotion-induced recognition bias (a more liberal response criterion for emotional than for neutral stimuli). Studies on visual emotion perception and attention suggest that emotion-related processes can be modulated by means of spatial-frequency filtering of the presented emotional stimuli. Specifically, low spatial frequencies are assumed to play a primary role for the influence of emotion on attention and judgment. Given this theoretical background, we investigated whether spatial-frequency filtering also impacts (1) the memory advantage for emotional faces and (2) the emotion-induced recognition bias, in a series of old/new recognition experiments. Participants completed incidental-learning tasks with high- (HSF) and low- (LSF) spatial-frequency-filtered emotional and neutral faces. The results of the surprise recognition tests showed a clear memory advantage for emotional stimuli. Most importantly, the emotional memory enhancement was significantly larger for face images containing only low-frequency information (LSF faces) than for HSF faces across all experiments, suggesting that LSF information plays a critical role in this effect, whereas the emotion-induced recognition bias was found only for HSF stimuli. We discuss our findings in terms of both the traditional account of different processing pathways for HSF and LSF information and a stimulus features account. The double dissociation in the results favors the latter account, that is, an explanation in terms of differences in the characteristics of HSF and LSF stimuli.
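The HSF/LSF stimuli described above are typically constructed by low-pass and high-pass filtering of face images. A minimal sketch of one common construction follows, using a Gaussian low-pass filter; the cutoff value is an arbitrary placeholder, whereas published stimulus sets specify cutoffs in cycles per image or per degree of visual angle.

```python
# Minimal sketch of LSF/HSF stimulus construction: Gaussian blurring keeps
# the low spatial frequencies, and subtracting the blurred image from the
# original leaves the high-frequency residual.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(face, sigma=8.0):
    """Return (lsf, hsf) versions of a 2-D grayscale face image."""
    face = face.astype(float)
    lsf = gaussian_filter(face, sigma=sigma)   # low spatial frequencies
    hsf = face - lsf                           # high spatial frequencies
    return lsf, hsf

lsf, hsf = split_spatial_frequencies(np.random.rand(256, 256))  # placeholder image
```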
Emotion Recognition Pattern in Adolescent Boys with Attention-Deficit/Hyperactivity Disorder
Bozsik, Csilla; Gadoros, Julia; Inantsy-Pap, Judit
2014-01-01
Background. Social and emotional deficits were recently considered as inherent features of individuals with attention-deficit hyperactivity disorder (ADHD), but only sporadic literature data exist on emotion recognition in adolescents with ADHD. The aim of the present study was to establish an emotion recognition profile in adolescent boys with ADHD in comparison with control adolescents. Methods. Forty-four adolescent boys (13–16 years) participated in the study after informed consent; 22 boys had a clinical diagnosis of ADHD, while data were also assessed from 22 adolescent control boys matched for age and Raven IQ. Parent- and self-reported behavioral characteristics were assessed by means of the Strengths and Difficulties Questionnaire. The recognition of six basic emotions was evaluated by the “Facial Expressions of Emotion-Stimuli and Tests.” Results. Compared to controls, adolescents with ADHD were more sensitive in the recognition of disgust, worse in the recognition of fear, and showed a tendency for impaired recognition of sadness. Hyperactivity measures showed an inverse correlation with fear recognition. Conclusion. Our data suggest that adolescent boys with ADHD have alterations in the recognition of specific emotions. PMID:25110694
Lin, Chia-Yao; Tien, Yi-Min; Huang, Jong-Tsun; Tsai, Chon-Haw; Hsu, Li-Chuan
2016-01-01
Because of dopaminergic neurodegeneration, patients with Parkinson's disease (PD) show impairment in the recognition of negative facial expressions. In the present study, we aimed to determine whether PD patients with more advanced motor problems would show a much greater deficit in recognition of emotional facial expressions than a control group and whether impairment of emotion recognition would extend to positive emotions. Twenty-nine PD patients and 29 age-matched healthy controls were recruited. Participants were asked to discriminate emotions in Experiment 1 and identify gender in Experiment 2. In Experiment 1, PD patients demonstrated a recognition deficit for negative (sadness and anger) and positive faces. Further analysis showed that only PD patients with high motor dysfunction performed poorly in recognition of happy faces. In Experiment 2, PD patients showed an intact ability for gender identification, and the results eliminated possible abilities in the functions measured in Experiment 2 as alternative explanations for the results of Experiment 1. We concluded that patients' ability to recognize emotions deteriorated as the disease progressed. Recognition of negative emotions was impaired first, and then the impairment extended to positive emotions.
Parents’ Emotion-Related Beliefs, Behaviors, and Skills Predict Children's Recognition of Emotion
Castro, Vanessa L.; Halberstadt, Amy G.; Lozada, Fantasy T.; Craig, Ashley B.
2015-01-01
Children who are able to recognize others’ emotions are successful in a variety of socioemotional domains, yet we know little about how school-aged children's abilities develop, particularly in the family context. We hypothesized that children develop emotion recognition skill as a function of parents’ own emotion-related beliefs, behaviors, and skills. We examined parents’ beliefs about the value of emotion and guidance of children's emotion, parents’ emotion labeling and teaching behaviors, and parents’ skill in recognizing children's emotions in relation to their school-aged children's emotion recognition skills. Sixty-nine parent-child dyads completed questionnaires, participated in dyadic laboratory tasks, and identified their own emotions and emotions felt by the other participant from videotaped segments. Regression analyses indicate that parents’ beliefs, behaviors, and skills together account for 37% of the variance in child emotion recognition ability, even after controlling for parent and child expressive clarity. The findings suggest the importance of the family milieu in the development of children's emotion recognition skill in middle childhood, and add to accumulating evidence suggesting important age-related shifts in the relation between parental emotion socialization and child emotional development. PMID:26005393
Trinkler, Iris; Cleret de Langavant, Laurent; Bachoud-Lévi, Anne-Catherine
2013-02-01
Patients with Huntington's disease (HD), a neurodegenerative disorder that causes major motor impairments, also show cognitive and emotional deficits. While their deficit in recognising emotions has been explored in depth, little is known about their ability to express emotions and understand their feelings. If these faculties were impaired, patients might not only misread emotion expressions in others; their own emotions might also be misinterpreted by others, and they might have difficulties understanding and describing their feelings. We compared the performance of recognition and expression of facial emotions in 13 HD patients with mild motor impairments but without significant bucco-facial abnormalities, and 13 controls matched for age and education. Emotion recognition was investigated in a forced-choice recognition test (FCR), and emotion expression by filming participants while they mimed the six basic emotional facial expressions (anger, disgust, fear, surprise, sadness and joy) to the experimenter. The films were then segmented into 60 stimuli per participant and four external raters performed a FCR on this material. Further, we tested understanding of feelings in self (alexithymia) and others (empathy) using questionnaires. Both recognition and expression were impaired across different emotions in HD compared to controls, and recognition and expression scores were correlated. By contrast, alexithymia and empathy scores were very similar in HD and controls. This suggests that emotion deficits in HD might be tied to the expression itself. Because similar emotion recognition-expression deficits are also found in Parkinson's disease and vascular lesions of the striatum, our results further confirm the importance of the striatum for emotion recognition and expression, while access to the meaning of feelings relies on a different brain network, and is spared in HD. Copyright © 2011 Elsevier Ltd. All rights reserved.
Facial emotion recognition and borderline personality pathology.
Meehan, Kevin B; De Panfilis, Chiara; Cain, Nicole M; Antonucci, Camilla; Soliani, Antonio; Clarkin, John F; Sambataro, Fabio
2017-09-01
The impact of borderline personality pathology on facial emotion recognition has been in dispute, with impaired, comparable, and enhanced accuracy found in high borderline personality groups. Discrepancies are likely driven by variations in facial emotion recognition tasks across studies (stimuli type/intensity) and heterogeneity in borderline personality pathology. This study evaluates facial emotion recognition for neutral and negative emotions (fear/sadness/disgust/anger) presented at varying intensities. Effortful control was evaluated as a moderator of facial emotion recognition in borderline personality. Non-clinical multicultural undergraduates (n = 132) completed a morphed facial emotion recognition task of neutral and negative emotional expressions across different intensities (100% Neutral; 25%/50%/75% Emotion) and self-reported borderline personality features and effortful control. Greater borderline personality features related to decreased accuracy in detecting neutral faces, but increased accuracy in detecting negative emotion faces, particularly at low-intensity thresholds. This pattern was moderated by effortful control; for individuals with low but not high effortful control, greater borderline personality features related to misattributions of emotion to neutral expressions, and enhanced detection of low-intensity emotional expressions. Individuals with high borderline personality features may therefore exhibit a bias toward detecting negative emotions that are not or barely present; however, good self-regulatory skills may protect against this potential social-cognitive vulnerability. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
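The graded intensities used in morph tasks like the one above are, at their simplest, linear blends between a neutral image and a full-intensity emotional image of the same face. The sketch below illustrates only that intensity dimension; real stimulus sets also use landmark-based warping to keep features aligned.

```python
# Minimal sketch of intensity morphing: a pixel-wise linear blend between
# an aligned neutral face image and a 100% emotional image.
import numpy as np

def morph(neutral, emotional, intensity):
    """Blend two aligned grayscale images; intensity in [0, 1]."""
    return (1.0 - intensity) * neutral + intensity * emotional

neutral = np.random.rand(256, 256)    # placeholder images
emotional = np.random.rand(256, 256)
stimuli = {pct: morph(neutral, emotional, pct / 100) for pct in (25, 50, 75)}
```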
Social Approach and Emotion Recognition in Fragile X Syndrome
ERIC Educational Resources Information Center
Williams, Tracey A.; Porter, Melanie A.; Langdon, Robyn
2014-01-01
Evidence is emerging that individuals with Fragile X syndrome (FXS) display emotion recognition deficits, which may contribute to their significant social difficulties. The current study investigated the emotion recognition abilities, and social approachability judgments, of FXS individuals when processing emotional stimuli. Relative to…
Gender interactions in the recognition of emotions and conduct symptoms in adolescents.
Halász, József; Aspán, Nikoletta; Bozsik, Csilla; Gádoros, Júlia; Inántsy-Pap, Judit
2014-01-01
According to the literature, impairment in the recognition of emotions might be related to an antisocial developmental pathway. In the present study, gender-specific interactions between emotion recognition and conduct symptoms were studied in non-clinical adolescents. After informed consent, 29 boys and 24 girls (13-16 years, mean 14 ± 0.1 years) participated in the study. The parent version of the Strengths and Difficulties Questionnaire was used to assess behavioral problems. The recognition of basic emotions was analyzed according to both the gender of the participants and the gender of the stimulus faces via the "Facial Expressions of Emotion-Stimuli and Tests". Girls were significantly better than boys in the recognition of disgust, irrespective of the gender of the stimulus faces, although both genders were significantly better at recognizing disgust in male stimulus faces than in female stimulus faces. Both boys and girls were significantly better at recognizing sadness in female stimulus faces than in male stimulus faces. There was no gender effect (of either participant or stimulus face) on the recognition of other emotions. Conduct scores in boys were inversely correlated with the recognition of fear in male stimulus faces (R=-0.439, p<0.05) and with overall emotion recognition in male stimulus faces (R=-0.558, p<0.01). In girls, conduct scores showed a tendency toward a positive correlation with disgust recognition in female stimulus faces (R=0.376, p<0.07). A gender-specific interaction between the recognition of emotions and the antisocial developmental pathway is suggested.
Multi-subject subspace alignment for non-stationary EEG-based emotion recognition.
Chai, Xin; Wang, Qisong; Zhao, Yongping; Liu, Xin; Liu, Dan; Bai, Ou
2018-01-01
Emotion recognition based on EEG signals is a critical component in Human-Machine collaborative environments and psychiatric health diagnoses. However, EEG patterns have been found to vary across subjects due to user fatigue, different electrode placements, varying impedances, etc. This problem renders the performance of EEG-based emotion recognition highly subject-specific, requiring time-consuming individual calibration sessions to adapt an emotion recognition system to new subjects. Recently, domain adaptation (DA) strategies have achieved a great deal of success in dealing with inter-subject adaptation. However, most of them can only adapt one subject to another subject, which limits their applicability in real-world scenarios. To alleviate this issue, a novel unsupervised DA strategy called Multi-Subject Subspace Alignment (MSSA) is proposed in this paper, which takes advantage of a subspace alignment solution and multi-subject information in a unified framework to build personalized models without user-specific labeled data. Experiments on a public EEG dataset known as SEED verify the effectiveness and superiority of MSSA over state-of-the-art methods for dealing with multi-subject scenarios.
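MSSA builds on the subspace-alignment idea from unsupervised domain adaptation. The abstract does not give the algorithm, so the sketch below shows only the generic single-source version of subspace alignment (align the source PCA basis to the target basis), not the paper's multi-subject formulation; array shapes are invented.

```python
# Minimal sketch of subspace alignment for EEG features: fit PCA subspaces
# for a source and a target subject, compute the alignment matrix
# M = Ps^T Pt, and map source features into the target subspace. This is a
# generic illustration, not the MSSA algorithm itself.
import numpy as np
from sklearn.decomposition import PCA

def subspace_align(X_source, X_target, n_components=10):
    Xs = X_source - X_source.mean(axis=0)   # center each domain
    Xt = X_target - X_target.mean(axis=0)
    Ps = PCA(n_components=n_components).fit(Xs).components_.T
    Pt = PCA(n_components=n_components).fit(Xt).components_.T
    M = Ps.T @ Pt                           # alignment between the two bases
    return Xs @ Ps @ M, Xt @ Pt             # source in target coords, target projected

# Invented shapes: 200/180 trials x 62 feature dimensions
Xs_aligned, Xt_proj = subspace_align(np.random.randn(200, 62), np.random.randn(180, 62))
```

A classifier trained on the aligned source features with the source subject's labels can then be applied to the target features without target labels, which is the calibration-free setting the paper targets.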
Sex differences in facial emotion recognition across varying expression intensity levels from videos
Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark
2018-01-01
There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or 'extreme' examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1sec) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations.
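The Hu measure referenced above is Wagner's (1993) unbiased hit rate, which corrects raw accuracy for response bias by taking into account how often each response label is used. A minimal sketch from a stimulus-by-response confusion matrix, with invented counts:

```python
# Minimal sketch of Wagner's (1993) unbiased hit rate (Hu): for each
# emotion category, the squared number of hits divided by the product of
# the number of stimuli in that category and the number of times that
# response was given.
import numpy as np

def unbiased_hit_rates(confusion):
    """confusion[i, j] = count of stimuli of category i labeled as j."""
    confusion = np.asarray(confusion, dtype=float)
    hits = np.diag(confusion)
    stimulus_totals = confusion.sum(axis=1)   # how often each category was shown
    response_totals = confusion.sum(axis=0)   # how often each label was used
    return hits**2 / (stimulus_totals * response_totals)

# Two-category toy example (rows: true anger/fear; columns: responses)
print(unbiased_hit_rates([[8, 2],
                          [4, 6]]))  # -> [0.533..., 0.45]
```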
Li, Shijia; Weerda, Riklef; Guenzel, Friederike; Wolf, Oliver T; Thiel, Christiane M
2013-07-01
Previous studies have shown that acute psychosocial stress impairs retrieval of declarative memory, with emotional material being especially sensitive to this effect. A functional deletion variant of the ADRA2B gene encoding the α2B-adrenergic receptor has been shown to increase emotional memory and neural activity in the amygdala. We investigated the effects of acute psychosocial stress and the ADRA2B allele on recognition memory for emotional and neutral faces. Forty-two healthy, non-smoking male volunteers (30 deletion carriers, 12 noncarriers) were tested with a face recognition paradigm. During encoding they were presented with emotional and neutral faces. One hour later, participants underwent either a stress procedure (the Trier Social Stress Test, TSST) or a control procedure, which was followed immediately by the retrieval session, where subjects had to indicate whether each presented face was old or new. Stress increased salivary cortisol concentrations, blood pressure and pulse, and impaired recognition memory for faces independent of emotional valence and genotype. Participants showed generally slower reaction times to emotional faces. Carriers of the ADRA2B functional deletion variant showed impaired recognition and slower retrieval of neutral faces under stress. Further, they were significantly slower in retrieving fearful faces in the control condition. The findings indicate that a genetic variation of the noradrenergic system may preserve emotional faces from the stress-induced memory impairments seen for neutral faces and heighten reactivity to emotional stimuli under control conditions. Copyright © 2013 Elsevier Inc. All rights reserved.
Öztürk, Ahmet; Kiliç, Alperen; Deveci, Erdem; Kirpinar, İsmet
2016-01-01
Background: Facial emotion recognition is well studied in various neuropsychiatric disorders. Although emotional disturbances are strongly associated with somatoform disorders, only a restricted number of studies have investigated facial emotion recognition in these disorders, and none have done so using the new diagnostic criteria that reclassify somatoform disorders as somatic symptom and related disorders (SSD). In this study, we aimed to compare facial emotion recognition between patients with SSD and age- and sex-matched healthy controls (HC), thereby re-examining facial emotion recognition under the new criteria for SSD. Patients and methods: After applying the inclusion and exclusion criteria, 54 patients diagnosed with SSD according to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria and 46 age- and sex-matched HC were selected to participate in the present study. Facial emotion recognition, alexithymia, and anxiety and depression status were compared between the groups. Results: Patients with SSD had significantly lower facial emotion scores for fearful, disgusted, and neutral faces compared with age- and sex-matched HC (t=−2.88, P=0.005; t=−2.86, P=0.005; and t=−2.56, P=0.009, respectively). After eliminating the effects of alexithymia and depressive and anxious states, the groups were found to be similar in terms of their responses to facial emotion and mean reaction time to facial emotions. Discussion: Although a limited number of studies have examined the recognition of facial emotion in patients with somatoform disorders, ours is the first to investigate facial emotion recognition in patients with SSD diagnosed according to the DSM-5 criteria. Recognition of facial emotion was found to be disturbed in patients with SSD. However, our findings suggest that these disturbances were significantly associated with alexithymia and with depression and anxiety status, which is consistent with previous studies. Further studies are needed to clarify the associations between facial emotion recognition and SSD. PMID:27199559
Development and validation of an Argentine set of facial expressions of emotion.
Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro
2017-02-01
Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.
Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki
2013-09-01
This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.
Family environment influences emotion recognition following paediatric traumatic brain injury.
Schmidt, Adam T; Orsten, Kimberley D; Hanten, Gerri R; Li, Xiaoqi; Levin, Harvey S
2010-01-01
This study investigated the relationship between family functioning and performance on two tasks of emotion recognition (emotional prosody and face emotion recognition) and a cognitive control procedure (the Flanker task) following paediatric traumatic brain injury (TBI) or orthopaedic injury (OI). A total of 142 children (75 TBI, 67 OI) were assessed on three occasions: baseline, 3 months and 1 year post-injury on the two emotion recognition tasks and the Flanker task. Caregivers also completed the Life Stressors and Resources Scale (LISRES) on each occasion. Growth curve analysis was used to analyse the data. Results indicated that family functioning influenced performance on the emotional prosody and Flanker tasks but not on the face emotion recognition task. Findings on both the emotional prosody and Flanker tasks were generally similar across groups. However, financial resources emerged as significantly related to emotional prosody performance in the TBI group only (p = 0.0123). Findings suggest that family functioning variables, especially financial resources, can influence performance on an emotional processing task following TBI in children.
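The growth curve analysis mentioned above is typically fit as a linear mixed-effects model with random intercepts and slopes over the repeated assessments. A hedged sketch with statsmodels follows; the file and column names (score, months, group, child_id) are hypothetical placeholders, not the study's variables.

```python
# Minimal sketch of a growth-curve analysis over three occasions
# (baseline, 3 months, 1 year): performance modeled as a function of time,
# group, and their interaction, with a random intercept and slope per child.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("prosody_scores.csv")         # hypothetical long-format data
model = smf.mixedlm("score ~ months * group",  # fixed effects
                    data=df,
                    groups=df["child_id"],     # one cluster per child
                    re_formula="~months")      # random intercept + slope
result = model.fit()
print(result.summary())
```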
Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M; Begeer, Sander
2014-09-01
The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the focus on research with relatively small sample sizes. Additionally, it is proposed that although children with ASD may correctly identify emotion expression they rely on more deliberate, more time-consuming strategies in order to accurately recognize emotion expressions when compared to typically developing children. In the current study, we examine both emotion recognition accuracy and response time in a large sample of children, and explore the moderating influence of verbal ability on these findings. The sample consisted of 86 children with ASD (M age = 10.65) and 114 typically developing children (M age = 10.32) between 7 and 13 years of age. All children completed a pre-test (emotion word-word matching), and test phase consisting of basic emotion recognition, whereby they were required to match a target emotion expression to the correct emotion word; accuracy and response time were recorded. Verbal IQ was controlled for in the analyses. We found no evidence of a systematic deficit in emotion recognition accuracy or response time for children with ASD, controlling for verbal ability. However, when controlling for children's accuracy in word-word matching, children with ASD had significantly lower emotion recognition accuracy when compared to typically developing children. The findings suggest that the social impairments observed in children with ASD are not the result of marked deficits in basic emotion recognition accuracy or longer response times. However, children with ASD may be relying on other perceptual skills (such as advanced word-word matching) to complete emotion recognition tasks at a similar level as typically developing children.
Rosenberg, Hannah; McDonald, Skye; Dethier, Marie; Kessels, Roy P C; Westbrook, R Frederick
2014-11-01
Many individuals who sustain moderate-severe traumatic brain injuries (TBI) are poor at recognizing emotional expressions, with a greater impairment in recognizing negative (e.g., fear, disgust, sadness, and anger) than positive emotions (e.g., happiness and surprise). It has been questioned whether this "valence effect" might be an artifact of the wide use of static facial emotion stimuli (usually full-blown expressions) which differ in difficulty rather than a real consequence of brain impairment. This study aimed to investigate the valence effect in TBI, while examining emotion recognition across different intensities (low, medium, and high). Twenty-seven individuals with TBI and 28 matched control participants were tested on the Emotion Recognition Task (ERT). The TBI group was more impaired in overall emotion recognition, and less accurate recognizing negative emotions. However, examining the performance across the different intensities indicated that this difference was driven by some emotions (e.g., happiness) being much easier to recognize than others (e.g., fear and surprise). Our findings indicate that individuals with TBI have an overall deficit in facial emotion recognition, and that both people with TBI and control participants found some emotions more difficult than others. These results suggest that conventional measures of facial affect recognition that do not examine variance in the difficulty of emotions may produce erroneous conclusions about differential impairment. They also cast doubt on the notion that dissociable neural pathways underlie the recognition of positive and negative emotions, which are differentially affected by TBI and potentially other neurological or psychiatric disorders.
Conduct symptoms and emotion recognition in adolescent boys with externalization problems.
Aspan, Nikoletta; Vida, Peter; Gadoros, Julia; Halasz, Jozsef
2013-01-01
In adults with antisocial personality disorder, marked alterations in the recognition of facial affect have been described. Less consistent data are available on emotion recognition in adolescents with externalization problems. The aim of the present study was to assess the relation between the recognition of emotions and conduct symptoms in adolescent boys with externalization problems. Adolescent boys with externalization problems referred to Vadaskert Child Psychiatry Hospital participated in the study after informed consent (N = 114, 11-17 years, mean = 13.4). The conduct problems scale of the Strengths and Difficulties Questionnaire (parent and self-report) was used, and performance on a facial emotion recognition test was assessed. The conduct problems score (parent and self-report) was inversely correlated with overall emotion recognition. In the self-report, the conduct problems score was inversely correlated with the recognition of anger, fear, and sadness. Adolescents with high conduct problems scores were significantly worse in the recognition of fear and sadness and in overall recognition than adolescents with low conduct scores, irrespective of age and IQ. Our results suggest that impaired emotion recognition is dimensionally related to conduct problems and might have importance in the development of antisocial behavior.
Buratto, Luciano G.; Pottage, Claire L.; Brown, Charity; Morrison, Catriona M.; Schaefer, Alexandre
2014-01-01
Memory performance is usually impaired when participants have to encode information while performing a concurrent task. Recent studies using recall tasks have found that emotional items are more resistant to such cognitive depletion effects than non-emotional items. However, when recognition tasks are used, the same effect is more elusive as recent recognition studies have obtained contradictory results. In two experiments, we provide evidence that negative emotional content can reliably reduce the effects of cognitive depletion on recognition memory only if stimuli with high levels of emotional intensity are used. In particular, we found that recognition performance for realistic pictures was impaired by a secondary 3-back working memory task during encoding if stimuli were emotionally neutral or had moderate levels of negative emotionality. In contrast, when negative pictures with high levels of emotional intensity were used, the detrimental effects of the secondary task were significantly attenuated. PMID:25330251
Effectiveness of Emotion Recognition Training for Young Children with Developmental Delays
ERIC Educational Resources Information Center
Downs, Andrew; Strand, Paul
2008-01-01
Emotion recognition is a basic skill that is thought to facilitate development of social and emotional competence. There is little research available examining whether therapeutic or instructional interventions can improve the emotion recognition skill of young children with various developmental disabilities. Sixteen preschool children with…
Body Emotion Recognition Disproportionately Depends on Vertical Orientations during Childhood
ERIC Educational Resources Information Center
Balas, Benjamin; Auen, Amanda; Saville, Alyson; Schmidt, Jamie
2018-01-01
Children's ability to recognize emotional expressions from faces and bodies develops during childhood. However, the low-level features that support accurate body emotion recognition during development have not been well characterized. This is in marked contrast to facial emotion recognition, which is known to depend upon specific spatial frequency…
Facial and prosodic emotion recognition in social anxiety disorder.
Tseng, Huai-Hsuan; Huang, Yu-Lien; Chen, Jian-Ting; Liang, Kuei-Yu; Lin, Chao-Cheng; Chen, Sue-Huei
2017-07-01
Patients with social anxiety disorder (SAD) have a cognitive preference to negatively evaluate emotional information. In particular, the preferential biases in prosodic emotion recognition in SAD have been much less explored. The present study aims to investigate whether SAD patients retain negative evaluation biases across visual and auditory modalities when given sufficient response time to recognise emotions. Thirty-one SAD patients and 31 age- and gender-matched healthy participants completed a culturally suitable non-verbal emotion recognition task and received clinical assessments for social anxiety and depressive symptoms. A repeated measures analysis of variance was conducted to examine group differences in emotion recognition. Compared to healthy participants, SAD patients were significantly less accurate at recognising facial and prosodic emotions, and spent more time on emotion recognition. The differences were mainly driven by the lower accuracy and longer reaction times for recognising fearful emotions in SAD patients. Within the SAD patients, lower accuracy of sad face recognition was associated with higher severity of depressive and social anxiety symptoms, particularly with avoidance symptoms. These findings may represent a cross-modality pattern of avoidance in the later stage of identifying negative emotions in SAD. This pattern may be linked to clinical symptom severity.
Chu, Simon; McNeill, Kimberley; Ireland, Jane L; Qurashi, Inti
2015-12-15
We investigated the relationship between a change in sleep quality and facial emotion recognition accuracy in a group of mentally-disordered inpatients at a secure forensic psychiatric unit. Patients whose sleep improved over time also showed improved facial emotion recognition while patients who showed no sleep improvement showed no change in emotion recognition. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Face Age and Eye Gaze Influence Older Adults' Emotion Recognition.
Campbell, Anna; Murray, Janice E; Atkinson, Lianne; Ruffman, Ted
2017-07-01
Eye gaze has been shown to influence emotion recognition. In addition, older adults (over 65 years) are not as influenced by gaze direction cues as young adults (18-30 years). Nevertheless, these differences might stem from the use of young to middle-aged faces in emotion recognition research because older adults have an attention bias toward old-age faces. Therefore, using older face stimuli might allow older adults to process gaze direction cues to influence emotion recognition. To investigate this idea, young and older adults completed an emotion recognition task with young and older face stimuli displaying direct and averted gaze, assessing labeling accuracy for angry, disgusted, fearful, happy, and sad faces. Direct gaze rather than averted gaze improved young adults' recognition of emotions in young and older faces, but for older adults this was true only for older faces. The current study highlights the impact of stimulus face age and gaze direction on emotion recognition in young and older adults. The use of young face stimuli with direct gaze in most research might contribute to age-related emotion recognition differences. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
When Early Experiences Build a Wall to Others’ Emotions: An Electrophysiological and Autonomic Study
Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Sestito, Mariateresa; Ravera, Roberto; Gallese, Vittorio
2013-01-01
Facial expression of emotions is a powerful vehicle for communicating information about others’ emotional states and it normally induces facial mimicry in the observers. The aim of this study was to investigate if early aversive experiences could interfere with emotion recognition, facial mimicry, and with the autonomic regulation of social behaviors. We conducted a facial emotion recognition task in a group of “street-boys” and in an age-matched control group. We recorded facial electromyography (EMG), a marker of facial mimicry, and respiratory sinus arrhythmia (RSA), an index of the recruitment of autonomic system promoting social behaviors and predisposition, in response to the observation of facial expressions of emotions. Results showed an over-attribution of anger, and reduced EMG responses during the observation of both positive and negative expressions only among street-boys. Street-boys also showed lower RSA after observation of facial expressions and ineffective RSA suppression during presentation of non-threatening expressions. Our findings suggest that early aversive experiences alter not only emotion recognition but also facial mimicry of emotions. These deficits affect the autonomic regulation of social behaviors inducing lower social predisposition after the visualization of facial expressions and an ineffective recruitment of defensive behavior in response to non-threatening expressions. PMID:23593374
Oxytocin improves emotion recognition for older males.
Campbell, Anna; Ruffman, Ted; Murray, Janice E; Glue, Paul
2014-10-01
Older adults (≥60 years) perform worse than young adults (18-30 years) when recognizing facial expressions of emotion. One hypothesized cause of these changes is a decline in neurotransmitters that affect information processing within the brain. In the present study, we examined the neuropeptide oxytocin, which functions to increase neurotransmission. Research suggests that oxytocin benefits the emotion recognition of less socially able individuals. Men tend to have lower levels of oxytocin, and older men tend to have worse emotion recognition than older women; therefore, there is reason to think that older men will be particularly likely to benefit from oxytocin. We examined this idea using a double-blind design, testing 68 older and 68 young adults randomly allocated to receive oxytocin nasal spray (20 international units) or placebo. Forty-five minutes afterward they completed an emotion recognition task assessing labeling accuracy for angry, disgusted, fearful, happy, neutral, and sad faces. Older males receiving oxytocin showed improved emotion recognition relative to those taking placebo. No differences were found for older females or young adults. We hypothesize that oxytocin facilitates emotion recognition by improving neurotransmission in the group with the worst emotion recognition. Copyright © 2014 Elsevier Inc. All rights reserved.
Major depressive disorder skews the recognition of emotional prosody.
Péron, Julie; El Tamer, Sarah; Grandjean, Didier; Leray, Emmanuelle; Travers, David; Drapier, Dominique; Vérin, Marc; Millet, Bruno
2011-06-01
Major depressive disorder (MDD) is associated with abnormalities in the recognition of emotional stimuli. MDD patients ascribe more negative emotion but also less positive emotion to facial expressions, suggesting blunted responsiveness to positive emotional stimuli. To ascertain whether these emotional biases are modality-specific, we examined the effects of MDD on the recognition of emotions from voices using a paradigm designed to capture subtle effects of biases. Twenty-one MDD patients and 21 healthy controls (HC) underwent clinical and neuropsychological assessments, followed by a paradigm featuring pseudowords spoken by actors in five types of emotional prosody, rated on continuous scales. Overall, MDD patients performed more poorly than HC, displaying significantly impaired recognition of fear, happiness and sadness. Compared with HC, they rated fear significantly more highly when listening to anger stimuli. They also displayed a bias toward surprise, rating it far higher when they heard sad or fearful utterances. Furthermore, for happiness stimuli, MDD patients gave higher ratings for negative emotions (fear and sadness). A multiple regression model on recognition of emotional prosody in MDD patients showed that the best fit was achieved using executive functioning measures (categorical fluency, number of errors on the MCST, and TMT B-A) and the total score of the Montgomery-Asberg Depression Rating Scale. Impaired recognition of emotions would appear not to be specific to the visual modality but to be present also when emotions are expressed vocally, with this impairment being related to depression severity and dysexecutive syndrome. MDD seems to skew the recognition of emotional prosody toward negative emotional stimuli, and the blunting of positive emotion appears not to be restricted to the visual modality. Copyright © 2011 Elsevier Inc. All rights reserved.
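The regression model described above can be expressed as an ordinary least-squares fit of prosody recognition on the executive measures and depression severity. A sketch follows; the file and column names are hypothetical placeholders for the study's variables.

```python
# Minimal sketch of the multiple regression reported above: emotional
# prosody recognition regressed on executive functioning measures and
# MADRS depression severity. Column names are invented placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mdd_prosody.csv")  # hypothetical dataset
formula = "prosody_recognition ~ fluency + mcst_errors + tmt_b_minus_a + madrs_total"
model = smf.ols(formula, data=df).fit()
print(model.summary())               # coefficients and model fit (R-squared)
```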
Specific Impairments in the Recognition of Emotional Facial Expressions in Parkinson’s Disease
Clark, Uraina S.; Neargarder, Sandy; Cronin-Golomb, Alice
2008-01-01
Studies investigating the ability to recognize emotional facial expressions in non-demented individuals with Parkinson’s disease (PD) have yielded equivocal findings. A possible reason for this variability may lie in the confounding of emotion recognition with cognitive task requirements, a confound arising from the lack of a control condition using non-emotional stimuli. The present study examined emotional facial expression recognition abilities in 20 non-demented patients with PD and 23 control participants relative to their performances on a non-emotional landscape categorization test with comparable task requirements. We found that PD participants were normal on the control task but exhibited selective impairments in the recognition of facial emotion, specifically for anger (driven by those with right hemisphere pathology) and surprise (driven by those with left hemisphere pathology), even when controlling for depression level. Male but not female PD participants further displayed specific deficits in the recognition of fearful expressions. We suggest that the neural substrates that may subserve these impairments include the ventral striatum, amygdala, and prefrontal cortices. Finally, we observed that in PD participants, deficiencies in facial emotion recognition correlated with higher levels of interpersonal distress, which calls attention to the significant psychosocial impact that facial emotion recognition impairments may have on individuals with PD. PMID:18485422
McIntosh, Lindsey G; Mannava, Sishir; Camalier, Corrie R; Folley, Bradley S; Albritton, Aaron; Konrad, Peter E; Charles, David; Park, Sohee; Neimat, Joseph S
2014-01-01
Parkinson's disease (PD) is traditionally regarded as a neurodegenerative movement disorder; however, nigrostriatal dopaminergic degeneration is also thought to disrupt non-motor loops connecting the basal ganglia to areas in frontal cortex involved in cognition and emotion processing. PD patients are impaired on tests of emotion recognition, but it is difficult to disentangle this deficit from the more general cognitive dysfunction that frequently accompanies disease progression. Testing for emotion recognition deficits early in the disease course, prior to cognitive decline, better assesses the sensitivity of these non-motor corticobasal ganglia-thalamocortical loops involved in emotion processing to early degenerative change in basal ganglia circuits. In addition, contrasting this with a group of healthy aging individuals demonstrates changes in emotion processing specific to the degeneration of basal ganglia circuitry in PD. Early PD patients (EPD) were recruited from a randomized clinical trial testing the safety and tolerability of deep brain stimulation (DBS) of the subthalamic nucleus (STN-DBS) in early-staged PD. EPD patients had previously been randomized to receive optimal drug therapy only (ODT) or drug therapy plus STN-DBS (ODT + DBS). Matched healthy elderly controls (HEC) and young controls (HYC) also participated in this study. Participants completed two control tasks and three emotion recognition tests that varied in stimulus domain. EPD patients were impaired on all emotion recognition tasks compared to HEC. Neither therapy type (ODT or ODT + DBS) nor therapy state (ON/OFF) altered emotion recognition performance in this study. Finally, HEC were impaired on vocal emotion recognition relative to HYC, suggesting a decline related to healthy aging. This study supports the existence of impaired emotion recognition early in the PD course, implicating an early disruption of fronto-striatal loops mediating emotional function.
Inconsistent emotion recognition deficits across stimulus modalities in Huntington's disease.
Rees, Elin M; Farmer, Ruth; Cole, James H; Henley, Susie M D; Sprengelmeyer, Reiner; Frost, Chris; Scahill, Rachael I; Hobbs, Nicola Z; Tabrizi, Sarah J
2014-11-01
Recognition of negative emotions is impaired in Huntington's disease (HD). It is unclear whether these emotion-specific problems are driven by dissociable cognitive deficits, emotion complexity, test cue difficulty, or visuoperceptual impairments. This study set out to further characterise emotion recognition in HD by comparing patterns of deficits across stimulus modalities, notably including, for the first time in HD, the more ecologically and clinically relevant modality of film clips portraying dynamic facial expressions. Fifteen early HD and 17 control participants were tested on emotion recognition from static facial photographs, non-verbal vocal expressions and one-second dynamic film clips, all depicting different emotions. Statistically significant evidence of impairment of anger, disgust and fear recognition was seen in HD participants compared with healthy controls across multiple stimulus modalities. The extent of the impairment, as measured by the difference in the number of errors made between HD participants and controls, differed according to the combination of emotion and modality (p=0.013, interaction test). The largest between-group difference was seen in the recognition of anger from film clips. Consistent with previous reports, anger, disgust and fear were the most poorly recognised emotions by the HD group. This impairment did not appear to be due to task demands or expression complexity, as the pattern of between-group differences did not correspond to the pattern of errors made by either group, implicating emotion-specific cognitive processing pathology. There was, however, evidence that the extent of emotion recognition deficits differed significantly between stimulus modalities. The implications for designing future tests of emotion recognition and for caregiving are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Gold, Rinat; Butler, Pamela; Revheim, Nadine; Leitman, David; Hansen, John A.; Gur, Ruben; Kantrowitz, Joshua T.; Laukka, Petri; Juslin, Patrik N.; Silipo, Gail S.; Javitt, Daniel C.
2013-01-01
Objective Schizophrenia is associated with deficits in the ability to perceive emotion based upon tone of voice. The basis for this deficit, however, remains unclear, and assessment batteries remain limited. We evaluated performance in schizophrenia on a novel voice emotion recognition battery with well-characterized physical features, relative to impairments in more general emotional and cognitive function. Methods We studied a primary sample of 92 patients relative to 73 controls. Stimuli were characterized according to both intended emotion and physical features (e.g., pitch, intensity) that contributed to the emotional percept. Parallel measures of visual emotion recognition, pitch perception, general cognition, and overall outcome were obtained. More limited measures were obtained in an independent replication sample of 36 patients, 31 age-matched controls, and 188 general comparison subjects. Results Patients showed significant, large effect size deficits in voice emotion recognition (F=25.4, p<.00001, d=1.1), and were preferentially impaired in recognition of emotion based upon pitch-based, but not intensity-based, features (group × feature interaction: F=7.79, p=.006). Emotion recognition deficits were significantly correlated with pitch perception impairments both across (r=.56, p<.0001) and within (r=.47, p<.0001) groups. Path analysis showed both sensory-specific and general cognitive contributions to auditory emotion recognition deficits in schizophrenia. Similar patterns of results were observed in the replication sample. Conclusions The present study demonstrates impairments in auditory emotion recognition in schizophrenia relative to acoustic features of the underlying stimuli. Furthermore, it provides tools for, and highlights the need for greater attention to, physical features of stimuli used in the study of social cognition in neuropsychiatric disorders. PMID:22362394
Impairment in the recognition of emotion across different media following traumatic brain injury.
Williams, Claire; Wood, Rodger Ll
2010-02-01
The current study examined emotion recognition following traumatic brain injury (TBI) and examined whether performance differed according to the affective valence and type of media presentation of the stimuli. A total of 64 patients with TBI and matched controls completed the Emotion Evaluation Test (EET) and Ekman 60 Faces Test (E-60-FT). Patients with TBI also completed measures of information processing and verbal ability. Results revealed that the TBI group were significantly impaired compared to controls when recognizing emotion on the EET and E-60-FT. A significant main effect of valence was found in both groups, with poor recognition of negative emotions. However, the difference between the recognition of positive and negative emotions was larger in the TBI group. The TBI group were also more accurate recognizing emotion displayed in audiovisual media (EET) than that displayed in still media (E-60-FT). No significant relationship was obtained between emotion recognition tasks and information-processing speed. A significant positive relationship was found between the E-60-FT and one measure of verbal ability. These findings support models of emotion that specify separate neurological pathways for certain emotions and different media and confirm that patients with TBI are vulnerable to experiencing emotion recognition difficulties.
Facial recognition deficits as a potential endophenotype in bipolar disorder.
Vierck, Esther; Porter, Richard J; Joyce, Peter R
2015-11-30
Bipolar disorder (BD) is considered a highly heritable and genetically complex disorder. Several cognitive functions, such as executive functions and verbal memory, have been suggested as promising candidates for endophenotypes. Although there is evidence for deficits in facial emotion recognition in individuals with BD, studies investigating these functions as endophenotypes are rare. The current study investigates emotion recognition as a potential endophenotype in BD by comparing 36 BD participants, 24 of their first-degree relatives and 40 healthy control participants on a computerised facial emotion recognition task. Group differences were evaluated using repeated-measures analysis of covariance with age as a covariate. Results revealed slowed emotion recognition in both BD participants and their relatives. Furthermore, BD participants were less accurate than healthy controls in their recognition of emotion expressions. We found no evidence of emotion-specific differences between groups. Our results provide evidence for facial emotion recognition as a potential endophenotype in BD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Clark, Uraina S.; Walker, Keenan A.; Cohen, Ronald A.; Devlin, Kathryn N.; Folkers, Anna M.; Pina, Mathew M.; Tashima, Karen T.
2015-01-01
Impaired facial emotion recognition abilities in HIV+ patients are well documented, but little is known about the neural etiology of these difficulties. We examined the relation of facial emotion recognition abilities to regional brain volumes in 44 HIV-positive (HIV+) and 44 HIV-negative control (HC) adults. Volumes of structures implicated in HIV-associated neuropathology and emotion recognition were measured on MRI using an automated segmentation tool. Relative to HC, HIV+ patients demonstrated emotion recognition impairments for fearful expressions, reduced anterior cingulate cortex (ACC) volumes, and increased amygdala volumes. In the HIV+ group, fear recognition impairments correlated significantly with ACC, but not amygdala, volumes. ACC reductions were also associated with lower nadir CD4 levels (i.e., greater HIV-disease severity). These findings extend our understanding of the neurobiological substrates underlying an essential social function, facial emotion recognition, in HIV+ individuals and implicate HIV-related ACC atrophy in the impairment of these abilities. PMID:25744868
ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition.
Zhang, Jianhai; Chen, Ming; Zhao, Shaokai; Hu, Sanqing; Shi, Zhiguo; Cao, Yu
2016-09-22
Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the need for a more immersive human-computer interface (HCI) environment. To improve recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels, however, adds computational complexity and causes inconvenience to users. ReliefF-based channel selection methods were therefore systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP). Three strategies were employed to select the best channels for classifying four emotional states (joy, fear, sadness and relaxation). Furthermore, a support vector machine (SVM) was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods, and a comparison with similar F-score-based strategies was given. Strategies that evaluate a channel as a unit gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels' weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy of 59.13% ± 11.00% using 19 channels). In addition, the selection of subject-independent channels related to emotion processing was also investigated. The sensors selected subject-independently, located over the frontal and parietal lobes, were identified as providing more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will contribute to the realization of a practical EEG-based emotion recognition system.
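The abstract above describes a concrete pipeline: score EEG channels with ReliefF, keep the top-ranked channels as a unit, and validate the reduced montage with an SVM. The following is a minimal sketch of that idea, not the authors' code; the toy data, channel and feature counts, and the simple one-nearest-hit/miss ReliefF variant are assumptions for illustration.

```python
# Sketch of ReliefF-based channel selection + SVM validation (assumptions,
# not the published implementation). Features are assumed to arrive as an
# (n_trials, n_channels, n_feats) array already extracted from EEG.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def relieff_weights(X, y):
    """Relief-style feature weights using one nearest hit and miss per sample."""
    X = (X - X.min(0)) / (np.ptp(X, axis=0) + 1e-12)  # scale features to [0, 1]
    w = np.zeros(X.shape[1])
    for i in range(len(X)):
        d = np.abs(X - X[i]).sum(1)      # L1 distance to every other sample
        d[i] = np.inf                    # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], d, np.inf))   # nearest same-class
        miss = np.argmin(np.where(y != y[i], d, np.inf))  # nearest other-class
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / len(X)

def select_channels(feats, y, n_keep=8):
    """Treat each channel as a unit: sum the ReliefF weights of its features."""
    n_trials, n_channels, n_feats = feats.shape
    w = relieff_weights(feats.reshape(n_trials, -1), y)
    return np.argsort(w.reshape(n_channels, n_feats).sum(1))[::-1][:n_keep]

# Toy stand-in for DEAP-derived features: 32 channels x 5 band powers per trial.
rng = np.random.default_rng(0)
feats = rng.normal(size=(160, 32, 5))
y = rng.integers(0, 4, size=160)          # joy / fear / sadness / relaxation
keep = select_channels(feats, y)
X = feats[:, keep, :].reshape(len(feats), -1)
print("kept channels:", keep)
print("5-fold SVM accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```

On real recordings the selected channels, rather than the toy accuracy printed here, are the quantity of interest; the 58.51% with eight channels reported in the abstract comes from the authors' full feature set, not from this sketch.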
ERIC Educational Resources Information Center
Rojahn, Johannes; And Others
1995-01-01
This literature review discusses 21 studies on facial emotion recognition by persons with mental retardation in terms of methodological characteristics, stimulus material, salient variables and their relation to recognition tasks, and emotion recognition deficits in mental retardation. A table provides comparative data on all 21 studies. (DB)
Facial Emotion Recognition in Bipolar Disorder and Healthy Aging.
Altamura, Mario; Padalino, Flavia A; Stella, Eleonora; Balzotti, Angela; Bellomo, Antonello; Palumbo, Rocco; Di Domenico, Alberto; Mammarella, Nicola; Fairfield, Beth
2016-03-01
Emotional face recognition is impaired in bipolar disorder, but it is not clear whether this is specific for the illness. Here, we investigated how aging and bipolar disorder influence dynamic emotional face recognition. Twenty older adults, 16 bipolar patients, and 20 control subjects performed a dynamic affective facial recognition task and a subsequent rating task. Participants pressed a key as soon as they were able to discriminate whether the neutral face was assuming a happy or angry facial expression and then rated the intensity of each facial expression. Results showed that older adults recognized happy expressions faster, whereas bipolar patients recognized angry expressions faster. Furthermore, both groups rated emotional faces more intensely than did the control subjects. This study is one of the first to compare how aging and clinical conditions influence emotional facial recognition and underlines the need to consider the role of specific and common factors in emotional face recognition.
The involvement of emotion recognition in affective theory of mind.
Mier, Daniela; Lis, Stefanie; Neuthe, Kerstin; Sauer, Carina; Esslinger, Christine; Gallhofer, Bernd; Kirsch, Peter
2010-11-01
This study was conducted to explore the relationship between emotion recognition and affective Theory of Mind (ToM). Forty subjects performed a facial emotion recognition and an emotional intention recognition task (affective ToM) in an event-related fMRI study. Conjunction analysis revealed overlapping activation during both tasks. Activation in some of these conjunctly activated regions was even stronger during affective ToM than during emotion recognition, namely in the inferior frontal gyrus, the superior temporal sulcus, the temporal pole, and the amygdala. In contrast to previous studies investigating ToM, we found no activation in the anterior cingulate, commonly assumed as the key region for ToM. The results point to a close relationship of emotion recognition and affective ToM and can be interpreted as evidence for the assumption that at least basal forms of ToM occur by an embodied, non-cognitive process. Copyright © 2010 Society for Psychophysiological Research.
Impaired Emotion Recognition in Music in Parkinson's Disease
ERIC Educational Resources Information Center
van Tricht, Mirjam J.; Smeding, Harriet M. M.; Speelman, Johannes D.; Schmand, Ben A.
2010-01-01
Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson's disease (PD) and 20 matched healthy volunteers. The role of cognitive dysfunction…
Emotion Recognition and Visual-Scan Paths in Fragile X Syndrome
ERIC Educational Resources Information Center
Shaw, Tracey A.; Porter, Melanie A.
2013-01-01
This study investigated emotion recognition abilities and visual scanning of emotional faces in 16 Fragile X syndrome (FXS) individuals compared to 16 chronological-age and 16 mental-age matched controls. The relationships between emotion recognition, visual scan-paths and symptoms of social anxiety, schizotypy and autism were also explored.…
Benito, Adolfo; Lahera, Guillermo; Herrera, Sara; Muncharaz, Ramón; Benito, Guillermo; Fernández-Liria, Alberto; Montes, José Manuel
2013-01-01
To analyze the recognition, identification, and discrimination of facial emotions in a sample of outpatients with bipolar disorder (BD). Forty-four outpatients with a diagnosis of BD and 48 matched control subjects were selected. Both groups were assessed with tests for recognition (Emotion Recognition-40 - ER40), identification (Facial Emotion Identification Test - FEIT), and discrimination (Facial Emotion Discrimination Test - FEDT) of facial emotions, as well as a theory of mind (ToM) verbal test (Hinting Task). Differences between groups were analyzed, controlling for the influence of mild depressive and manic symptoms. Patients with BD scored significantly lower than controls on recognition (ER40), identification (FEIT), and discrimination (FEDT) of emotions. Regarding the verbal measure of ToM, a lower score was also observed in patients compared to controls. Patients with mild syndromal depressive symptoms obtained outcomes similar to those of patients in euthymia. A significant correlation between FEDT scores and global functioning (measured by the Functioning Assessment Short Test, FAST) was found. These results suggest that, even in euthymia, patients with BD experience deficits in recognition, identification, and discrimination of facial emotions, with potential functional implications.
Sleep deprivation impairs the accurate recognition of human emotions.
van der Helm, Els; Gujar, Ninad; Walker, Matthew P
2010-03-01
Investigate the impact of sleep deprivation on the ability to recognize the intensity of human facial emotions. Randomized total sleep-deprivation or sleep-rested conditions, involving between-group and within-group repeated measures analysis. Experimental laboratory study. Thirty-seven healthy participants (21 females), aged 18-25 y, were randomly assigned to the sleep control (SC: n = 17) or total sleep deprivation group (TSD: n = 20). Participants performed an emotional face recognition task, in which they evaluated 3 different affective face categories: Sad, Happy, and Angry, each ranging in a gradient from neutral to increasingly emotional. In the TSD group, the task was performed once under conditions of sleep deprivation and twice under sleep-rested conditions following different durations of sleep recovery. In the SC group, the task was performed twice under sleep-rested conditions, controlling for repeatability. In the TSD group, when sleep-deprived, there was a marked and significant blunting in the recognition of Angry and Happy affective expressions in the moderate (but not extreme) emotional intensity range; these differences were most reliable and significant in female participants. No change in the recognition of Sad expressions was observed. These recognition deficits were, however, ameliorated following one night of recovery sleep. No changes in task performance were observed in the SC group. Sleep deprivation selectively impairs the accurate judgment of human facial emotions, especially in the threat-relevant (Anger) and reward-relevant (Happy) categories, an effect observed most significantly in females. Such findings suggest that sleep loss impairs discrete affective neural systems, disrupting the identification of salient affective social cues.
Facial Emotion Recognition and Expression in Parkinson’s Disease: An Emotional Mirror Mechanism?
Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J.; Kilner, James
2017-01-01
Background and aim Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups of participants. Methods Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-alternative forced-choice response format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. Results For emotion recognition, PD patients scored lower than HC on the Ekman total score (p<0.001) and on the single-emotion sub-scores for happiness, fear, anger and sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC differed significantly in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger (all p<0.001). RT and the level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between facial emotion recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). Conclusions PD patients showed difficulties both in recognizing emotional facial expressions produced by others and in posing facial emotional expressions compared to healthy subjects. The linear correlation between recognition and expression in both experimental groups suggests that the two mechanisms share a common system, which could be deteriorated in patients with PD. These results open new clinical and rehabilitation perspectives. PMID:28068393
Glucocorticoid effects on object recognition memory require training-associated emotional arousal.
Okuda, Shoki; Roozendaal, Benno; McGaugh, James L
2004-01-20
Considerable evidence implicates glucocorticoid hormones in the regulation of memory consolidation and memory retrieval. The present experiments investigated whether the influence of these hormones on memory depends on the level of emotional arousal induced by the training experience. We investigated this issue in male Sprague-Dawley rats by examining the effects of immediate posttraining systemic injections of the glucocorticoid corticosterone on object recognition memory under two conditions that differed in their training-associated emotional arousal. In rats that were not previously habituated to the experimental context, corticosterone (0.3, 1.0, or 3.0 mg/kg, s.c.) administered immediately after a 3-min training trial enhanced 24-hr retention performance in an inverted-U shaped dose-response relationship. In contrast, corticosterone did not affect 24-hr retention of rats that received extensive prior habituation to the experimental context and, thus, had decreased novelty-induced emotional arousal during training. Additionally, immediate posttraining administration of corticosterone to nonhabituated rats, in doses that enhanced 24-hr retention, impaired object recognition performance at a 1-hr retention interval whereas corticosterone administered after training to well-habituated rats did not impair 1-hr retention. Thus, the present findings suggest that training-induced emotional arousal may be essential for glucocorticoid effects on object recognition memory.
ERIC Educational Resources Information Center
Herba, Catherine; Phillips, Mary
2004-01-01
Background: Intact emotion processing is critical for normal emotional development. Recent advances in neuroimaging have facilitated the examination of brain development, and have allowed for the exploration of the relationships between the development of emotion processing abilities, and that of associated neural systems. Methods: A literature…
EEG-based emotion recognition in music listening.
Lin, Yuan-Pin; Wang, Chi-Hong; Jung, Tzyy-Ping; Wu, Tien-Lin; Jeng, Shyh-Kang; Duann, Jeng-Ren; Chen, Jyh-Horng
2010-07-01
Ongoing brain activity can be recorded as the electroencephalogram (EEG) to discover the links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subjects' self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. A support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure) and obtained an average classification accuracy of 82.29% +/- 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were primarily derived from electrodes placed near the frontal and the parietal lobes, consistent with many of the findings in the literature. This study might lead to a practical system for noninvasive assessment of emotional states in practical or clinical applications.
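As a rough illustration of the kind of pipeline this abstract describes, spectral EEG features fed to a support vector machine, here is a minimal sketch; it is not the authors' implementation, and the sampling rate, frequency bands, channel count, and toy data are placeholder assumptions.

```python
# Sketch: Welch band-power features per channel, classified with an SVM
# (assumed parameters, not the published pipeline).
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 256  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(trial):
    """trial: (n_channels, n_samples) -> flat vector of log band powers."""
    f, pxx = welch(trial, fs=FS, nperseg=2 * FS, axis=-1)
    feats = [pxx[:, (f >= lo) & (f < hi)].mean(-1) for lo, hi in BANDS.values()]
    return np.log(np.stack(feats, axis=1)).ravel()   # log-power is customary

# Toy stand-in for music-listening EEG: 120 trials, 24 channels, 4 s each.
rng = np.random.default_rng(1)
X = np.array([band_powers(t) for t in rng.normal(size=(120, 24, 4 * FS))])
y = rng.integers(0, 4, size=120)   # joy / anger / sadness / pleasure
print("5-fold SVM accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```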
Balconi, M; Cobelli, C
2015-02-26
The present research explored the cortical correlates of emotional memories in response to words and pictures. Subjects' performance (Accuracy Index, AI; response times, RTs; RTs/AI) was considered when repetitive Transcranial Magnetic Stimulation (rTMS) was applied to the left dorsolateral prefrontal cortex (LDLPFC). Specifically, the role of the LDLPFC was tested using a memory task in which old (previously encoded targets) and new (previously not encoded distractors) emotional pictures/words had to be recognized. Valence (positive vs. negative) and arousing power (high vs. low) of the stimuli were also modulated. Moreover, subjective evaluation of emotional stimuli in terms of valence/arousal was explored. We found significant performance improvements (higher AI, reduced RTs, improved general performance) in response to rTMS. This "better recognition effect" was related only to specific emotional features, that is, positive high-arousal pictures or words. Moreover, no significant differences were found between stimulus categories. A direct relationship was also observed between subjective evaluation of emotional cues and memory performance when rTMS was applied to the LDLPFC. Supported by valence and approach models of emotion, we suppose that a left-lateralized prefrontal system may induce better recognition of positive high-arousal words, and that evaluation of the emotional cue is related to prefrontal activation, affecting recognition memory for emotions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Brain correlates of musical and facial emotion recognition: evidence from the dementias.
Hsieh, S; Hornberger, M; Piguet, O; Hodges, J R
2012-07-01
The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities: unfamiliar musical tunes and unknown faces as well as volumetric MRI. Patients with SD were most impaired with the recognition of facial and musical emotions, particularly for negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions by facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language. Copyright © 2012 Elsevier Ltd. All rights reserved.
Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue
2009-06-15
Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.
The Effects of Cognitive Reappraisal and Expressive Suppression on Memory of Emotional Pictures.
Wang, Yan Mei; Chen, Jie; Han, Ben Yue
2017-01-01
In the field of emotion research, the influence of emotion regulation strategies on memory with emotional materials has been widely discussed in recent years. However, existing studies have focused exclusively on regulating negative emotion but not positive emotion. Therefore, in the present study, we investigated the influence of emotion regulation strategies for positive emotion on memory. One hundred and twenty college students were selected as participants. Emotional pictures (positive, negative and neutral) were selected from Chinese Affective Picture System (CAPS) as experimental materials. We employed a mixed, 4 (emotion regulation strategies: cognitive up-regulation, cognitive down-regulation, expressive suppression, passive viewing) × 3 (emotional pictures: positive, neutral, negative) experimental design. We investigated the influences of different emotion regulation strategies on memory performance, using free recall and recognition tasks with pictures varying in emotional content. The results showed that recognition and free recall memory performance of the cognitive reappraisal groups (up-regulation and down-regulation) were both better than that of the passive viewing group for all emotional pictures. No significant differences were reported in the two kinds of memory scores between the expressive suppression and passive viewing groups. The results also showed that the memory performance with the emotional pictures differed according to the form of memory test. For the recognition test, participants performed better with positive images than with neutral images. Free recall scores with negative images were higher than those with neutral images. These results suggest that both cognitive reappraisal regulation strategies (up-regulation and down-regulation) promoted explicit memories of the emotional content of stimuli, and the form of memory test influenced performance with emotional pictures.
Family environment influences emotion recognition following paediatric traumatic brain injury
Schmidt, Adam T.; Orsten, Kimberley D.; Hanten, Gerri R.; Li, Xiaoqi; Levin, Harvey S.
2011-01-01
Objective This study investigated the relationship between family functioning and performance on two tasks of emotion recognition (emotional prosody and face emotion recognition) and a cognitive control procedure (the Flanker task) following paediatric traumatic brain injury (TBI) or orthopaedic injury (OI). Methods A total of 142 children (75 TBI, 67 OI) were assessed on three occasions: baseline, 3 months and 1 year post-injury on the two emotion recognition tasks and the Flanker task. Caregivers also completed the Life Stressors and Resources Scale (LISRES) on each occasion. Growth curve analysis was used to analyse the data. Results Results indicated that family functioning influenced performance on the emotional prosody and Flanker tasks but not on the face emotion recognition task. Findings on both the emotional prosody and Flanker tasks were generally similar across groups. However, financial resources emerged as significantly related to emotional prosody performance in the TBI group only (p = 0.0123). Conclusions Findings suggest family functioning variables—especially financial resources—can influence performance on an emotional processing task following TBI in children. PMID:21058900
Herniman, Sarah E; Allott, Kelly A; Killackey, Eóin; Hester, Robert; Cotton, Sue M
2017-01-15
Comorbid depression is common in first-episode schizophrenia spectrum (FES) disorders. Both depression and FES are associated with significant deficits in facial and prosody emotion recognition performance. However, it remains unclear whether people with FES and comorbid depression, compared to those without comorbid depression, have overall poorer emotion recognition or, instead, a different pattern of emotion recognition deficits. The aim of this study was to compare facial and prosody emotion recognition performance between those with and without comorbid depression in FES. This study involved secondary analysis of baseline data from a randomized controlled trial of vocational intervention for young people with first-episode psychosis (N=82; age range: 15-25 years). Those with comorbid depression (n=24) had more accurate recognition of sadness in faces compared to those without comorbid depression. Severity of depressive symptoms was also associated with more accurate recognition of sadness in faces. Such results did not recur for prosody emotion recognition. In addition to the cross-sectional design, limitations of this study include the absence of neutral facial and prosodic stimuli. Findings indicate a mood-congruent negative bias in facial emotion recognition in those with comorbid depression and FES, and provide support for cognitive theories of depression that emphasise the role of such biases in the development and maintenance of depression. Longitudinal research is needed to determine whether mood-congruent negative biases are implicated in the development and maintenance of depression in FES, or whether such biases are simply markers of depressed state. Copyright © 2016 Elsevier B.V. All rights reserved.
Feeser, Melanie; Fan, Yan; Weigand, Anne; Hahn, Adam; Gärtner, Matti; Aust, Sabine; Böker, Heinz; Bajbouj, Malek; Grimm, Simone
2014-12-01
Previous studies have shown that oxytocin (OXT) enhances social cognitive processes. It has also been demonstrated that OXT does not uniformly facilitate social cognition: the effects of OXT administration strongly depend on exposure to stressful experiences in early life. Emotional face recognition is crucial for social cognition. However, no study has yet examined how the effects of OXT on the ability to identify emotional faces are altered by early life stress (ELS) experiences. Given the role of OXT in modulating social motivational processes, we specifically aimed to investigate its effects on the recognition of approach- and avoidance-related facial emotions. In a double-blind, between-subjects, placebo-controlled design, 82 male participants performed an emotion recognition task with faces taken from the "Karolinska Directed Emotional Faces" set. We clustered the six basic emotions along the dimensions of approach (happy, surprise, anger) and avoidance (fear, sadness, disgust). ELS was assessed with the Childhood Trauma Questionnaire (CTQ). Our results showed that OXT improved the ability to recognize avoidance-related emotional faces compared to approach-related emotional faces. Whereas performance for avoidance-related emotions in participants with higher ELS scores was comparable in the OXT and placebo conditions, OXT enhanced emotion recognition in participants with lower ELS scores. Independent of OXT administration, we observed increased emotion recognition for avoidance-related faces in participants with high ELS scores. Our findings suggest that the investigation of OXT effects on social recognition requires a broad approach that takes ELS experiences as well as motivational processes into account.
Baez, Sandra; Marengo, Juan; Perez, Ana; Huepe, David; Font, Fernanda Giralt; Rial, Veronica; Gonzalez-Gadea, María Luz; Manes, Facundo; Ibanez, Agustin
2015-09-01
Impaired social cognition has been claimed to be a mechanism underlying the development and maintenance of borderline personality disorder (BPD). One important aspect of social cognition is the theory of mind (ToM), a complex skill that seems to be influenced by more basic processes, such as executive functions (EF) and emotion recognition. Previous ToM studies in BPD have yielded inconsistent results. This study assessed the performance of BPD adults on ToM, emotion recognition, and EF tasks. We also examined whether EF and emotion recognition could predict the performance on ToM tasks. We evaluated 15 adults with BPD and 15 matched healthy controls using different tasks of EF, emotion recognition, and ToM. The results showed that BPD adults exhibited deficits in the three domains, which seem to be task-dependent. Furthermore, we found that EF and emotion recognition predicted the performance on ToM. Our results suggest that tasks that involve real-life social scenarios and contextual cues are more sensitive to detect ToM and emotion recognition deficits in BPD individuals. Our findings also indicate that (a) ToM variability in BPD is partially explained by individual differences on EF and emotion recognition; and (b) ToM deficits of BPD patients are partially explained by the capacity to integrate cues from face, prosody, gesture, and social context to identify the emotions and others' beliefs. © 2014 The British Psychological Society.
Corcoran, C M; Keilp, J G; Kayser, J; Klim, C; Butler, P D; Bruder, G E; Gur, R C; Javitt, D C
2015-10-01
Schizophrenia is characterized by profound and disabling deficits in the ability to recognize emotion in facial expression and tone of voice. Although these deficits are well documented in established schizophrenia using recently validated tasks, their predictive utility in at-risk populations has not been formally evaluated. The Penn Emotion Recognition and Discrimination tasks, and recently developed measures of auditory emotion recognition, were administered to 49 clinical high-risk subjects prospectively followed for 2 years for schizophrenia outcome, to 31 healthy controls, and to a developmental cohort of 43 individuals aged 7-26 years. Deficits in emotion recognition in at-risk subjects were compared with deficits in established schizophrenia and with normal neurocognitive growth curves from childhood to early adulthood. Deficits in emotion recognition significantly distinguished at-risk patients who transitioned to schizophrenia. By contrast, more general neurocognitive measures, such as attention vigilance or processing speed, were non-predictive. The best classification model for schizophrenia onset included both face emotion processing and negative symptoms, with an accuracy of 96% and an area under the receiver-operating characteristic curve of 0.99. In a parallel developmental study, emotion recognition abilities were found to reach maturity prior to the traditional age of risk for schizophrenia, suggesting they may serve as objective markers of early developmental insult. Profound deficits in emotion recognition exist in at-risk patients prior to schizophrenia onset. They may serve as an index of early developmental insult and represent an effective target for early identification and remediation. Future studies investigating emotion recognition deficits at both mechanistic and predictive levels are strongly encouraged.
Horton, Leslie E; Bridgwater, Miranda A; Haas, Gretchen L
2017-05-01
Emotion recognition, a social cognition domain, is impaired in people with schizophrenia and contributes to social dysfunction. Whether impaired emotion recognition emerges as a manifestation of illness or predates symptoms is unclear. Findings from studies of emotion recognition impairments in first-degree relatives of people with schizophrenia are mixed and, to our knowledge, no studies have investigated the link between emotion recognition and social functioning in that population. This study examined facial affect recognition and social skills in 16 offspring of parents with schizophrenia (familial high-risk/FHR) compared to 34 age- and sex-matched healthy controls (HC), ages 7-19. As hypothesised, FHR children exhibited impaired overall accuracy, accuracy in identifying fearful faces, and overall recognition speed relative to controls. Age-adjusted facial affect recognition accuracy scores predicted parent's overall rating of their child's social skills for both groups. This study supports the presence of facial affect recognition deficits in FHR children. Importantly, as the first known study to suggest the presence of these deficits in young, asymptomatic FHR children, it extends findings to a developmental stage predating symptoms. Further, findings point to a relationship between early emotion recognition and social skills. Improved characterisation of deficits in FHR children could inform early intervention.
Yang, Chengqing; Zhang, Tianhong; Li, Zezhi; Heeramun-Aubeeluck, Anisha; Liu, Na; Huang, Nan; Zhang, Jie; He, Leiying; Li, Hui; Tang, Yingying; Chen, Fazhan; Liu, Fei; Wang, Jijun; Lu, Zheng
2015-10-08
Although many studies have examined executive functions and facial emotion recognition in people with schizophrenia, few have focused on the correlation between them. Furthermore, their relationship in the siblings of patients also remains unclear. The aim of the present study was to examine the correlation between executive functions and facial emotion recognition in patients with first-episode schizophrenia and their siblings. Thirty patients with first-episode schizophrenia, twenty-six of their siblings, and thirty healthy controls were enrolled. They completed facial emotion recognition tasks using the Ekman Standard Faces Database, and executive functioning was measured with the Wisconsin Card Sorting Test (WCST). Hierarchical regression analysis was applied to assess the correlation between executive functions and facial emotion recognition. Our study found that in siblings, the accuracy in recognizing low-degree 'disgust' emotion was negatively correlated with the total correct rate in the WCST (r = -0.614, p = 0.023) but positively correlated with the total errors in the WCST (r = 0.623, p = 0.020); the accuracy in recognizing 'neutral' emotion was positively correlated with the total error rate in the WCST (r = 0.683, p = 0.014) and negatively correlated with the total correct rate in the WCST (r = -0.677, p = 0.017). People with schizophrenia showed an impairment in facial emotion recognition when identifying moderate 'happy' facial emotion, the accuracy of which was significantly correlated with the number of completed categories in the WCST (R² = 0.432, p < .05). There were no correlations between executive functions and facial emotion recognition in the healthy control group. Our study demonstrated that facial emotion recognition impairment correlated with executive function impairment in people with schizophrenia and their unaffected siblings but not in healthy controls.
ERIC Educational Resources Information Center
Evers, Kris; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2015-01-01
Emotion labelling was evaluated in two matched samples of 6-14-year old children with and without an autism spectrum disorder (ASD; N = 45 and N = 50, resp.), using six dynamic facial expressions. The Emotion Recognition Task proved to be valuable demonstrating subtle emotion recognition difficulties in ASD, as we showed a general poorer emotion…
Kobayakawa, Mutsutaka; Kawamura, Mitsuru
2011-12-01
Social cognition includes various components of information processing related to communication with other individuals. In this review, we have discussed 3 components of social cognitive function: face recognition, empathy, and decision making. Our social behavior involves recognition based on facial features and also involves empathizing with others; while making decisions, it is important to consider the social consequences of the course of action followed. Face recognition is divided into 2 routes for information processing: a route responsible for overt recognition of the face's identity and a route for emotional and orienting responses based on the face's personal affective significance. Two systems are possibly involved in empathy: a basic emotional contagion "mirroring" system and a more advanced "theory of mind" system that considers the cognitive perspective. Decision making is mediated by a widespread system that includes several cortical and subcortical components. Numerous lesion and neuroimaging studies have contributed to clarifying the neural correlates of social cognitive function, and greater information can be obtained on social cognitive function by combining these 2 approaches.
Recognition of emotions in autism: a formal meta-analysis.
Uljarevic, Mirko; Hamilton, Antonia
2013-07-01
Determining the integrity of emotion recognition in autistic spectrum disorder is important to our theoretical understanding of autism and to teaching social skills. Previous studies have reported both positive and negative results. Here, we take a formal meta-analytic approach, bringing together data from 48 papers testing over 980 participants with autism. Results show there is an emotion recognition difficulty in autism, with a mean effect size of 0.80, which reduces to 0.41 when a correction for publication bias is applied. Recognition of happiness was only marginally impaired in autism, but recognition of fear was marginally worse than recognition of happiness. This meta-analysis provides an opportunity to survey the state of emotion recognition research in autism and to outline potential future directions.
ERIC Educational Resources Information Center
Golan, Ofer; Gordon, Ilanit; Fichman, Keren; Keinan, Giora
2018-01-01
Children with ASD show emotion recognition difficulties, as part of their social communication deficits. We examined facial emotion recognition (FER) in intellectually disabled children with ASD and in younger typically developing (TD) controls, matched on mental age. Our emotion-matching paradigm employed three different modalities: facial, vocal…
ERIC Educational Resources Information Center
Schmidt, Adam T.; Hanten, Gerri R.; Li, Xiaoqi; Orsten, Kimberley D.; Levin, Harvey S.
2010-01-01
Children with closed head injuries often experience significant and persistent disruptions in their social and behavioral functioning. Studies with adults sustaining a traumatic brain injury (TBI) indicate deficits in emotion recognition and suggest that these difficulties may underlie some of the social deficits. The goal of the current study was…
Schultebraucks, Katharina; Deuter, Christian E; Duesenberg, Moritz; Schulze, Lars; Hellmann-Regen, Julian; Domke, Antonia; Lockenvitz, Lisa; Kuehl, Linn K; Otte, Christian; Wingenfeld, Katja
2016-09-01
Selective attention toward emotional cues and emotion recognition of facial expressions are important aspects of social cognition. Stress modulates social cognition through cortisol, which acts on glucocorticoid (GR) and mineralocorticoid receptors (MR) in the brain. We examined the role of MR activation in attentional bias toward emotional cues and in emotion recognition. We included 40 healthy young women and 40 healthy young men (mean age 23.9 ± 3.3), who received either 0.4 mg of the MR agonist fludrocortisone or placebo. A dot-probe paradigm was used to test for attentional biases toward emotional cues (happy and sad faces). Moreover, we used a facial emotion recognition task to investigate the ability to recognize emotional valence (anger and sadness) from facial expressions in four graded categories of emotional intensity (20, 30, 40, and 80%). In the emotional dot-probe task, we found a main effect of treatment and a treatment × valence interaction. Post hoc analyses revealed an attentional bias away from sad faces after placebo intake and a shift in selective attention toward sad faces after fludrocortisone intake compared to placebo. We found no attentional bias toward happy faces after fludrocortisone or placebo intake. In the facial emotion recognition task, there was no main effect of treatment. MR stimulation thus seems to be important in modulating quick, automatic emotional processing, i.e., a shift in selective attention toward negative emotional cues. Our results confirm and extend previous findings on MR function. However, we did not find an effect of MR stimulation on emotion recognition.
Time course of brain activation elicited by basic emotions.
Hot, Pascal; Sequeira, Henrique
2013-11-13
Whereas facial emotion recognition protocols have shown that each discrete emotion has a specific time course of brain activation, there is no electrophysiological evidence to support these findings for emotional induction by complex pictures. Our objective was to specify the differences between the time courses of brain activation elicited by feelings of happiness and, with unpleasant pictures, by feelings of disgust and sadness. We compared event-related potentials (ERPs) elicited by the viewing of high-arousing pictures from the International Affective Picture System, selected to induce specific emotions. In addition to a classical arousal effect on late positive components, we found specific ERP patterns for each emotion in early temporal windows (<200 ms). Disgust was the first emotion to be associated with distinct brain processing, after 140 ms, whereas happiness and sadness differed in ERPs elicited at the frontal and central sites after 160 ms. Our findings highlight the limits of the classical averaging of ERPs elicited by different emotions within the same valence and suggest that each emotion could elicit a specific temporal pattern of brain activation, similar to those observed with emotional face recognition.
Martínez-Castilla, Pastora; Burt, Michael; Borgatti, Renato; Gagliardi, Chiara
2015-01-01
In this study both the matching and developmental trajectories approaches were used to clarify questions that remain open in the literature on facial emotion recognition in Williams syndrome (WS) and Down syndrome (DS). The matching approach showed that individuals with WS or DS exhibit neither proficiency for the expression of happiness nor specific impairments for negative emotions. Instead, they present the same pattern of emotion recognition as typically developing (TD) individuals. Thus, the better performance on the recognition of positive compared to negative emotions usually reported in WS and DS is not specific of these populations but seems to represent a typical pattern. Prior studies based on the matching approach suggested that the development of facial emotion recognition is delayed in WS and atypical in DS. Nevertheless, and even though performance levels were lower in DS than in WS, the developmental trajectories approach used in this study evidenced that not only individuals with DS but also those with WS present atypical development in facial emotion recognition. Unlike in the TD participants, where developmental changes were observed along with age, in the WS and DS groups, the development of facial emotion recognition was static. Both individuals with WS and those with DS reached an early maximum developmental level due to cognitive constraints.
A wearable device for emotional recognition using facial expression and physiological response.
Jangho Kwon; Da-Hye Kim; Wanjoo Park; Laehyun Kim
2016-08-01
This paper introduces a glasses-type wearable system to detect the user's emotions using facial expression and physiological responses. The system is designed to acquire facial expressions through a built-in camera and physiological responses such as the photoplethysmogram (PPG) and electrodermal activity (EDA) in an unobtrusive way. We used video clips to induce emotions and test the system's suitability in the experiment. The results showed a few meaningful properties that associate emotions with the facial expressions and physiological responses captured by the developed wearable device. We expect that this wearable system with a built-in camera and physiological sensors may be a good solution for monitoring the user's emotional state in daily life.
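To make the signal-processing side of such a system concrete, here is a minimal sketch of two simple descriptors a device like this could compute: heart rate from PPG peaks and a tonic level from the EDA trace. The sampling rate, window length, and toy signals are assumptions, not details from the paper, and a real system would fuse such features with facial-expression cues.

```python
# Sketch of basic PPG/EDA feature extraction (assumed parameters,
# not the paper's implementation).
import numpy as np
from scipy.signal import find_peaks

FS = 64  # assumed wearable sensor sampling rate (Hz)

def ppg_heart_rate(ppg):
    """Mean heart rate (bpm) from peak-to-peak intervals in the PPG wave."""
    peaks, _ = find_peaks(ppg, distance=FS // 3)  # refractory gap of ~0.33 s
    ibi = np.diff(peaks) / FS                     # inter-beat intervals (s)
    return 60.0 / ibi.mean()

def eda_tonic_level(eda, win=5 * FS):
    """Tonic EDA as the mean of a moving-average baseline of the signal."""
    return np.convolve(eda, np.ones(win) / win, mode="valid").mean()

# Toy 30 s recording: a 1.2 Hz pulse wave plus noise, and a drifting EDA trace.
t = np.arange(30 * FS) / FS
rng = np.random.default_rng(2)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.normal(size=t.size)
eda = 2.0 + 0.01 * t + 0.05 * rng.normal(size=t.size)
print(f"HR ~ {ppg_heart_rate(ppg):.0f} bpm, tonic EDA ~ {eda_tonic_level(eda):.2f} uS")
```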
Age differences in right-wing authoritarianism and their relation to emotion recognition.
Ruffman, Ted; Wilson, Marc; Henry, Julie D; Dawson, Abigail; Chen, Yan; Kladnitski, Natalie; Myftari, Ella; Murray, Janice; Halberstadt, Jamin; Hunter, John A
2016-03-01
This study examined the correlates of right-wing authoritarianism (RWA) in older adults. Participants were given tasks measuring emotion recognition, executive functions and fluid IQ, and questionnaires measuring RWA, perceived threat and social dominance orientation. Study 1 established higher age-related RWA across the age span in more than 2,600 New Zealanders. Studies 2 to 4 found that threat, education, social dominance and age all predicted unique variance in older adults' RWA, but the most consistent predictor was emotion recognition, predicting unique variance in older adults' RWA independent of all other variables. We argue that older adults' worse emotion recognition is associated with a more general change in social judgment. Expression of extreme attitudes (right- or left-wing) has the potential to antagonize others, but worse emotion recognition means that subtle signals will not be perceived, making the expression of extreme attitudes more likely. Our findings are consistent with other studies showing that worsening emotion recognition underlies age-related declines in verbosity, understanding of social gaffes, and the ability to detect lies. Such results indicate that emotion recognition is a core social insight linked to many aspects of social cognition. (c) 2016 APA, all rights reserved.
Facial emotion recognition in patients with focal and diffuse axonal injury.
Yassin, Walid; Callahan, Brandy L; Ubukata, Shiho; Sugihara, Genichi; Murai, Toshiya; Ueda, Keita
2017-01-01
Facial emotion recognition impairment has been well documented in patients with traumatic brain injury. Studies exploring the neural substrates involved in such deficits have implicated specific grey matter structures (e.g. orbitofrontal regions), as well as diffuse white matter damage. Our study aims to clarify whether different types of injuries (i.e. focal vs. diffuse) will lead to different types of impairments on facial emotion recognition tasks, as no study has directly compared these patients. The present study examined performance and response patterns on a facial emotion recognition task in 14 participants with diffuse axonal injury (DAI), 14 with focal injury (FI) and 22 healthy controls. We found that, overall, participants with FI and DAI performed more poorly than controls on the facial emotion recognition task. Further, we observed comparable emotion recognition performance in participants with FI and DAI, despite differences in the nature and distribution of their lesions. However, the rating response pattern between the patient groups was different. This is the first study to show that pure DAI, without gross focal lesions, can independently lead to facial emotion recognition deficits and that rating patterns differ depending on the type and location of trauma.
Familiarity and face emotion recognition in patients with schizophrenia.
Lahera, Guillermo; Herrera, Sara; Fernández, Cristina; Bardón, Marta; de los Ángeles, Victoria; Fernández-Liria, Alberto
2014-01-01
To assess emotion recognition in familiar and unknown faces in a sample of schizophrenic patients and healthy controls. Face emotion recognition of 18 outpatients diagnosed with schizophrenia (DSM-IV-TR) and 18 healthy volunteers was assessed with two emotion recognition tasks using familiar faces and unknown faces. Each subject was accompanied by 4 familiar people (parents, siblings or friends), who were photographed expressing the six Ekman basic emotions. Face emotion recognition in familiar faces was assessed with this ad hoc instrument. In each case, the patient rated (from 1 to 10) the subjective familiarity and affective valence corresponding to each person. Patients with schizophrenia not only showed a deficit in the recognition of emotions in unknown faces (p=.01), but an even more pronounced deficit in familiar faces (p=.001). Controls had a similar success rate in the unknown faces task (mean: 18 +/- 2.2) and the familiar faces task (mean: 17.4 +/- 3). However, patients had a significantly lower score in the familiar faces task (mean: 13.2 +/- 3.8) than in the unknown faces task (mean: 16 +/- 2.4; p<.05). In both tests, the highest number of errors occurred with the emotions of anger and fear. Subjectively, the patient group reported a lower level of familiarity and emotional valence toward their respective relatives (p<.01). The sense of familiarity may be a factor involved in face emotion recognition, and it may be disturbed in schizophrenia.
Golan, Ofer; Baron-Cohen, Simon; Golan, Yael
2008-09-01
Children with autism spectrum conditions (ASC) have difficulties recognizing others' emotions. Research has mostly focused on basic emotion recognition, devoid of context. This study reports the results of a new task assessing recognition of complex emotions and mental states in social contexts. An ASC group (n = 23) was compared to a general population control group (n = 24). Children with ASC performed worse than controls on the task. Using task scores, more than 87% of the participants were correctly allocated to their group. This new test quantifies complex emotion and mental state recognition in life-like situations. Our findings reveal that children with ASC have residual difficulties in this aspect of empathy. The use of language-based compensatory strategies for emotion recognition is discussed.
[Emotional facial expression recognition impairment in Parkinson disease].
Lachenal-Chevallet, Karine; Bediou, Benoit; Bouvard, Martine; Thobois, Stéphane; Broussolle, Emmanuel; Vighetto, Alain; Krolak-Salmon, Pierre
2006-03-01
Some behavioral disturbances observed in Parkinson's disease (PD) could be related to impaired recognition of various social messages, particularly emotional facial expressions. Facial expression recognition was assessed using morphed faces (five emotions: happiness, fear, anger, disgust, neutral) and compared to gender recognition and general cognitive assessment in 12 patients with Parkinson's disease and 14 control subjects. Facial expression recognition was impaired among patients, whereas gender recognition, visuo-perceptive capacities and total efficiency were preserved. Post hoc analyses disclosed a deficit for fear and disgust recognition compared to control subjects. The impairment of emotional facial expression recognition in PD appears independent of other cognitive deficits. This impairment may be related to the dopaminergic depletion in basal ganglia and limbic brain regions. It could play a part in the psycho-behavioral disorders, and particularly the communication disorders, observed in Parkinson's disease patients.
Emotion recognition in Parkinson's disease: Static and dynamic factors.
Wasser, Cory I; Evans, Felicity; Kempnich, Clare; Glikmann-Johnston, Yifat; Andrews, Sophie C; Thyagarajan, Dominic; Stout, Julie C
2018-02-01
The authors tested the hypothesis that Parkinson's disease (PD) participants would perform better in an emotion recognition task with dynamic (video) stimuli compared to a task using only static (photograph) stimuli and compared performances on both tasks to healthy control participants. In a within-subjects study, 21 PD participants and 20 age-matched healthy controls performed both static and dynamic emotion recognition tasks. The authors used a 2-way analysis of variance (controlling for individual participant variance) to determine the effect of group (PD, control) on emotion recognition performance in static and dynamic facial recognition tasks. Groups did not significantly differ in their performances on the static and dynamic tasks; however, there was a suggestive trend toward PD participants performing worse than controls. PD participants may have subtle emotion recognition deficits that are not ameliorated by the addition of contextual cues, similar to those found in everyday scenarios. Consistent with previous literature, the results suggest that PD participants may have underlying emotion recognition deficits, which may impact their social functioning. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
ERIC Educational Resources Information Center
Treese, Anne-Cecile; Johansson, Mikael; Lindgren, Magnus
2010-01-01
The emotional salience of faces has previously been shown to induce memory distortions in recognition memory tasks. This event-related potential (ERP) study used repeated runs of a continuous recognition task with emotional and neutral faces to investigate emotion-induced memory distortions. In the second and third runs, participants made more…
Yang, Hao; Zhang, Junran; Jiang, Xiaomei; Liu, Fei
2018-04-01
In recent years, with the rapid development of machine learning techniques, deep learning algorithms have been widely used in one-dimensional physiological signal processing. In this paper we used electroencephalography (EEG) signals with a deep belief network (DBN) model, built in open-source deep learning frameworks, to identify emotional states (positive, negative and neutral), and compared the results of the DBN with a support vector machine (SVM). The EEG signals were collected from subjects under different emotional stimuli, and the DBN and SVM were used to classify the EEG signals across different features and frequency bands. We found that the average accuracy of the differential entropy (DE) feature with the DBN is 89.12%±6.54%, a better performance than previous research based on the same data set. At the same time, the classification results of the DBN are better than those of the traditional SVM (average classification accuracy of 84.2%±9.24%), with better accuracy and stability. In three experiments at different time points, a single subject achieved consistent classification results using the DBN (mean standard deviation 1.44%), showing that the system has steady performance and good repeatability. According to our research, the DE feature yields better classification results than other features, and the Beta and Gamma bands give higher classification accuracy in the emotion recognition model. In sum, classifier performance improves with the deep learning algorithm, which provides a reference for establishing a more accurate emotion recognition system. Meanwhile, the recognition results can be traced to identify the brain regions and frequency bands related to the emotions, which can help us better understand the emotional mechanism. This study has academic value and practical significance, and further investigation is still needed.
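The abstract names its feature (differential entropy per frequency band) and its baseline classifier (SVM) but gives no implementation, so here is a minimal Python sketch under stated assumptions: EEG is band-pass filtered per band, DE is computed with the Gaussian closed form 0.5*ln(2*pi*e*var), and an RBF SVM stands in as the baseline; the paper's DBN would replace that classifier. Band edges, sampling rate, and all names are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter for one EEG channel."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def de_features(eeg, fs=200.0,
                bands=((4, 8), (8, 14), (14, 31), (31, 50))):
    """eeg: (channels, samples) -> DE vector of length channels*bands.
    For a Gaussian signal, DE = 0.5 * ln(2*pi*e*variance)."""
    return np.array([
        0.5 * np.log(2 * np.pi * np.e * np.var(bandpass(ch, lo, hi, fs)))
        for ch in eeg for lo, hi in bands])

# X_train: stacked DE vectors per trial; y_train: labels 0/1/2 for
# negative/neutral/positive. The DBN would replace this baseline:
# clf = SVC(kernel="rbf").fit(X_train, y_train)
```

The Beta (roughly 14-31 Hz) and Gamma (roughly 31-50 Hz) entries in the assumed band list correspond to the bands the study found most discriminative.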
Seymour, Karen E; Jones, Richard N; Cushman, Grace K; Galvan, Thania; Puzia, Megan E; Kim, Kerri L; Spirito, Anthony; Dickstein, Daniel P
2016-03-01
Little is known about the bio-behavioral mechanisms underlying and differentiating suicide attempts from non-suicidal self-injury (NSSI) in adolescents. Adolescents who attempt suicide or engage in NSSI often report significant interpersonal and social difficulties. Emotional face recognition ability is a fundamental skill required for successful social interactions, and deficits in this ability may provide insight into the unique brain-behavior interactions underlying suicide attempts versus NSSI in adolescents. Therefore, we examined emotional face recognition ability among three mutually exclusive groups: (1) inpatient adolescents who attempted suicide (SA, n = 30); (2) inpatient adolescents engaged in NSSI (NSSI, n = 30); and (3) typically developing controls (TDC, n = 30) without psychiatric illness. Participants included adolescents aged 13-17 years, matched on age, gender and full-scale IQ. Emotional face recognition was evaluated using the Diagnostic Assessment of Nonverbal Accuracy (DANVA-2). Compared to TDC youth, adolescents with NSSI made more errors on child fearful and adult sad face recognition while controlling for psychopathology and medication status (ps < 0.05). No differences were found on emotional face recognition between NSSI and SA groups. Secondary analyses showed that compared to inpatients without major depression, those with major depression made fewer errors on adult sad face recognition even when controlling for group status (p < 0.05). Further, compared to inpatients without generalized anxiety, those with generalized anxiety made fewer recognition errors on adult happy faces even when controlling for group status (p < 0.05). Adolescent inpatients engaged in NSSI showed greater deficits in emotional face recognition than TDC, but not inpatient adolescents who attempted suicide. Results further suggest the importance of psychopathology in emotional face recognition. Replication of these preliminary results and examination of the role of context-dependent emotional processing are needed moving forward.
Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M
2013-01-01
This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion.
Alfimova, M V; Golimbet, V E; Korovaitseva, G I; Lezheiko, T V; Abramova, L I; Aksenova, E V; Bolgov, M I
2014-01-01
The 5-HTTLPR SLC6A4 and catechol-O-methyltransferase (COMT) Val158Met polymorphisms are reported to be associated with the processing of facial expressions in the general population. Impaired recognition of facial expressions, which is characteristic of schizophrenia, negatively impacts the social adaptation of patients. To search for molecular mechanisms of this deficit, we studied the main and epistatic effects of the 5-HTTLPR and Val158Met polymorphisms on facial emotion recognition in patients with schizophrenia (n=299) and healthy controls (n=232). The 5-HTTLPR polymorphism was associated with emotion recognition in patients: ll-homozygotes recognized facial emotions significantly better than s-allele carriers (F=8.00; p=0.005). Although the recognition of facial emotions was correlated with negative symptoms, verbal learning and trait anxiety, these variables did not significantly modify the association. In both groups, no effect of COMT on the recognition of facial emotions was found.
Theory of mind and recognition of facial emotion in dementia: challenge to current concepts.
Freedman, Morris; Binns, Malcolm A; Black, Sandra E; Murphy, Cara; Stuss, Donald T
2013-01-01
Current literature suggests that theory of mind (ToM) and recognition of facial emotion are impaired in behavioral variant frontotemporal dementia (bvFTD). In contrast, studies suggest that ToM is spared in Alzheimer disease (AD). However, there is controversy whether recognition of emotion in faces is impaired in AD. This study challenges the concepts that ToM is preserved in AD and that recognition of facial emotion is impaired in bvFTD. ToM, recognition of facial emotion, and identification of emotions associated with video vignettes were studied in bvFTD, AD, and normal controls. ToM was assessed using false-belief and visual perspective-taking tasks. Identification of facial emotion was tested using Ekman and Friesen's pictures of facial affect. After adjusting for relevant covariates, there were significant ToM deficits in bvFTD and AD compared with controls, whereas neither group was impaired in the identification of emotions associated with video vignettes. There was borderline impairment in recognizing angry faces in bvFTD. Patients with AD showed significant deficits on false belief and visual perspective taking, and bvFTD patients were impaired on second-order false belief. We report novel findings challenging the concepts that ToM is spared in AD and that recognition of facial emotion is impaired in bvFTD.
Automatic Facial Expression Recognition and Operator Functional State
NASA Technical Reports Server (NTRS)
Blanson, Nina
2011-01-01
The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
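The record above states that the basic program locates key facial regions with Haar classifiers and OpenCV during a live video stream. A minimal sketch of that detection step, using OpenCV's bundled frontal-face and eye cascades, follows; the emotion-inference stage is not shown, and the display loop is an assumption.

```python
import cv2

# Load OpenCV's stock Haar cascades for face and eye detection.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces, then search for eyes only inside each face region.
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("facial landmarks", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```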
Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha
2015-09-01
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for supervised learning-based facial expression recognition methods, because supervised methods cannot accommodate all the appearance variability across faces with respect to race, pose, lighting, facial biases, and so on, within a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in typical applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage and bypassing those frames in emotion classification saves computational power. In this paper, we propose a light-weight neutral-versus-emotion classification engine, which acts as a pre-processor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at key emotion (KE) points using a statistical texture model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the statistical texture model. Robustness to dynamic shifts of the KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information about the directionality of the specific facial action units acting on the respective KE point. As a result, the proposed method improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
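As a conceptual sketch of the gating idea in the abstract above, one can hold a per-user statistical model of texture patches around key facial points and route a frame to the full emotion classifier only when its patches deviate from that neutral model. The patch format, the z-score test, and the threshold below are assumptions, not the paper's exact statistical texture model.

```python
import numpy as np

def build_neutral_model(neutral_patches):
    """neutral_patches: (n_frames, n_points, patch_dim) array of texture
    patches around key emotion points, from per-user calibration frames."""
    return (neutral_patches.mean(axis=0),
            neutral_patches.std(axis=0) + 1e-6)

def is_neutral(patches, model, z_thresh=2.5):
    """Gate on the mean absolute z-score of the current patches."""
    mean, std = model
    return np.abs((patches - mean) / std).mean() < z_thresh

def classify_frame(patches, model, emotion_classifier):
    if is_neutral(patches, model):
        return "neutral"                    # cheap early exit
    return emotion_classifier(patches)      # expensive model runs rarely
```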
Sullivan, Susan; Campbell, Anna; Hutton, Sam B; Ruffman, Ted
2017-05-01
Research indicates that older adults' (≥60 years) emotion recognition is worse than that of young adults, that young and older men's emotion recognition is worse than that of young and older women (respectively), and that older adults look at mouths, compared with eyes, more than young adults do. Nevertheless, previous research has not compared older men's and women's looking at emotion faces, so the present study had two aims: (a) to examine whether the tendency to look at mouths is stronger amongst older men compared with older women and (b) to examine whether men's mouth looking correlates with better emotion recognition. We examined the emotion recognition abilities and spontaneous gaze patterns of young (n = 60) and older (n = 58) males and females as they labelled emotion faces. Older men spontaneously looked more at mouths than older women, and older men's looking at mouths correlated with their emotion recognition, whereas women's looking at eyes correlated with their emotion recognition. The findings are discussed in relation to a growing body of research suggesting both age and gender differences in response to emotional stimuli and the differential efficacy of mouth and eye looking for men and women. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Aging and Emotion Recognition: Not Just a Losing Matter
Sze, Jocelyn A.; Goodkind, Madeleine S.; Gyurak, Anett; Levenson, Robert W.
2013-01-01
Past studies on emotion recognition and aging have found evidence of age-related decline when emotion recognition was assessed by having participants detect single emotions depicted in static images of full or partial (e.g., eye region) faces. These tests afford good experimental control but do not capture the dynamic nature of real-world emotion recognition, which is often characterized by continuous emotional judgments and dynamic multi-modal stimuli. Research suggests that older adults often perform better under conditions that better mimic real-world social contexts. We assessed emotion recognition in young, middle-aged, and older adults using two traditional methods (single emotion judgments of static images of faces and eyes) and an additional method in which participants made continuous emotion judgments of dynamic, multi-modal stimuli (videotaped interactions between young, middle-aged, and older couples). Results revealed an age by test interaction. Largely consistent with prior research, we found some evidence that older adults performed worse than young adults when judging single emotions from images of faces (for sad and disgust faces only) and eyes (for older eyes only), with middle-aged adults falling in between. In contrast, older adults did better than young adults on the test involving continuous emotion judgments of dyadic interactions, with middle-aged adults falling in between. In tests in which target stimuli differed in age, emotion recognition was not facilitated by an age match between participant and target. These findings are discussed in terms of theoretical and methodological implications for the study of aging and emotional processing. PMID:22823183
McCade, Donna; Savage, Greg; Guastella, Adam; Hickie, Ian B; Lewis, Simon J G; Naismith, Sharon L
2013-09-01
Impaired emotion recognition in dementia is associated with increased patient agitation, behavior management difficulties, and caregiver burden. Emerging evidence supports the presence of very early emotion recognition difficulties in mild cognitive impairment (MCI); however, the relationship between these impairments and psychosocial measures has not yet been explored. The emotion recognition abilities of 27 patients with nonamnestic MCI (naMCI), 29 patients with amnestic MCI (aMCI), and 22 control participants were assessed. Self-report measures assessed patient functional disability, while informants rated the degree of burden they experienced. Difficulty in recognizing anger was evident in the amnestic subtype. Although both patient groups reported greater social functioning disability compared with the controls, a relationship between social dysfunction and anger recognition was evident only for patients with naMCI. A significant association was found between burden and anger recognition in patients with aMCI. Impaired emotion recognition abilities impact MCI subtypes differentially. Interventions targeted at patients with MCI and their caregivers are warranted.
ERIC Educational Resources Information Center
Golan, Ofer; Baron-Cohen, Simon; Golan, Yael
2008-01-01
Children with autism spectrum conditions (ASC) have difficulties recognizing others' emotions. Research has mostly focused on "basic" emotion recognition, devoid of context. This study reports the results of a new task, assessing recognition of "complex" emotions and mental states in social contexts. An ASC group (n = 23) was compared to a general…
Mancuso, Mauro; Magnani, Nadia; Cantagallo, Anna; Rossi, Giulia; Capitani, Donatella; Galletti, Vania; Cardamone, Giuseppe; Robertson, Ian Hamilton
2015-02-01
The aim of our study was to identify the common and separate mechanisms that might underpin emotion recognition impairment in patients with traumatic brain injury (TBI) and schizophrenia (Sz) compared with healthy controls (HCs). We recruited 21 Sz outpatients, 24 severe-TBI outpatients, and 38 HCs, and we used eye-tracking to compare facial emotion processing performance. Both Sz and TBI patients were significantly poorer at recognizing facial emotions than HCs. Sz patients explored the Pictures of Facial Affect stimuli differently and were significantly worse at recognizing neutral expressions. Selective or sustained attention deficits in TBI may reduce efficient emotion recognition, whereas in Sz a more strategic deficit underlies the observed problem. There would seem to be scope for effective rehabilitative training focused on emotion recognition.
The recognition of facial emotion expressions in Parkinson's disease.
Assogna, Francesca; Pontieri, Francesco E; Caltagirone, Carlo; Spalletta, Gianfranco
2008-11-01
A limited number of studies in Parkinson's disease (PD) suggest a disturbance in the recognition of facial emotion expressions. In particular, disgust recognition impairment has been reported in unmedicated and medicated PD patients. However, the results are rather inconclusive regarding the degree and selectivity of the emotion recognition impairment, and an associated impairment of almost all basic facial emotions in PD has also been described. Few studies have investigated the relationship with neuropsychiatric and neuropsychological symptoms, with mainly negative results. This inconsistency may be due to many different problems, such as emotion assessment, perceptual deficits, cognitive impairment, behavioral symptoms, illness severity and antiparkinsonian therapy. Here we review the clinical characteristics and neural structures involved in the recognition of specific facial emotion expressions, and the plausible role of dopamine transmission and dopamine replacement therapy in these processes. It is clear that future studies should be directed to clarify all these issues.
Action and Emotion Recognition from Point Light Displays: An Investigation of Gender Differences
Alaerts, Kaat; Nackaerts, Evelien; Meyns, Pieter; Swinnen, Stephan P.; Wenderoth, Nicole
2011-01-01
Folk psychology advocates the existence of gender differences in socio-cognitive functions such as ‘reading’ the mental states of others or discerning subtle differences in body-language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks, involving the recognition of distinct features from point light displays (PLDs) depicting bodily movements of a male and female actor. Although recognition scores were considerably high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly higher for males compared to females on PLD recognition tasks involving (i) the general recognition of ‘biological motion’ versus ‘non-biological’ (or ‘scrambled’ motion); or (ii) the recognition of the ‘emotional state’ of the PLD-figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) and for recognizing the gender of the PLD-figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the ‘Reading the Mind in the Eyes Test’ (Baron-Cohen, 2001)) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs versus facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions are relatively generalized across facial and bodily emotion perception. Moreover, the tight correlation between a subject's ability to discern subtle emotional cues from PLDs and the subject's ability to basically discriminate biological from non-biological motion provides indications that differences in emotion recognition may - at least to some degree – be related to more basic differences in processing biological motion per se. PMID:21695266
Hoffmann, Holger; Kessler, Henrik; Eppel, Tobias; Rukavina, Stefanie; Traue, Harald C
2010-11-01
Two experiments were conducted in order to investigate the effect of expression intensity on gender differences in the recognition of facial emotions. The first experiment compared recognition accuracy between female and male participants when emotional faces were shown with full-blown (100% emotional content) or subtle expressiveness (50%). In a second experiment more finely grained analyses were applied in order to measure recognition accuracy as a function of expression intensity (40%-100%). The results show that although women were more accurate than men in recognizing subtle facial displays of emotion, there was no difference between male and female participants when recognizing highly expressive stimuli. Copyright © 2010 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Janning, Ruth; Schatten, Carlotta; Schmidt-Thieme, Lars
2016-01-01
Recognising students' emotion, affect or cognition is a relatively young field and still a challenging task in the area of intelligent tutoring systems. There are several ways to use the output of these recognition tasks within the system. The approach most often mentioned in the literature is using it for giving feedback to the students. The…
Rodrigo-Ruiz, D; Perez-Gonzalez, J C; Cejudo, J
2017-08-16
It has recently been noted that children with attention deficit hyperactivity disorder (ADHD) show a deficit in emotional competence and emotional intelligence, specifically in their ability to recognize emotions. A systematic review of the scientific literature on the emotional recognition of facial expressions in children with ADHD is presented, in order to establish or rule out the existence of emotional deficits as a primary dysfunction in this disorder and, where appropriate, the effect size of the differences relative to typically developing children. The results reveal recent interest in the issue and a lack of information. Although there is no complete agreement, most studies show that emotional recognition of facial expressions is affected in children with ADHD, who are significantly less accurate than children in control groups at recognizing emotions communicated through facial expressions. Some of these studies compare the recognition of different discrete emotions, observing that children with ADHD tend to have greater difficulty recognizing negative emotions, especially anger, fear, and disgust. These results have direct implications for the educational and clinical diagnosis of ADHD, and for educational intervention: for children with ADHD, emotional education might be an advantageous aid.
Influences on Facial Emotion Recognition in Deaf Children
ERIC Educational Resources Information Center
Sidera, Francesc; Amadó, Anna; Martínez, Laura
2017-01-01
This exploratory research is aimed at studying facial emotion recognition abilities in deaf children and how they relate to linguistic skills and the characteristics of deafness. A total of 166 participants (75 deaf) aged 3-8 years were administered the following tasks: facial emotion recognition, naming vocabulary and cognitive ability. The…
The Differential Effects of Thalamus and Basal Ganglia on Facial Emotion Recognition
ERIC Educational Resources Information Center
Cheung, Crystal C. Y.; Lee, Tatia M. C.; Yip, James T. H.; King, Kristin E.; Li, Leonard S. W.
2006-01-01
This study examined if subcortical stroke was associated with impaired facial emotion recognition. Furthermore, the lateralization of the impairment and the differential profiles of facial emotion recognition deficits with localized thalamic or basal ganglia damage were also studied. Thirty-eight patients with subcortical strokes and 19 matched…
Towards Real-Time Speech Emotion Recognition for Affective E-Learning
ERIC Educational Resources Information Center
Bahreini, Kiavash; Nadolski, Rob; Westera, Wim
2016-01-01
This paper presents the voice emotion recognition part of the FILTWAM framework for real-time emotion recognition in affective e-learning settings. FILTWAM (Framework for Improving Learning Through Webcams And Microphones) intends to offer timely and appropriate online feedback based upon learner's vocal intonations and facial expressions in order…
Emotion Recognition Abilities and Empathy of Victims of Bullying
ERIC Educational Resources Information Center
Woods, Sarah; Wolke, Dieter; Nowicki, Stephen; Hall, Lynne
2009-01-01
Objectives: Bullying is a form of systematic abuse by peers with often serious consequences for victims. Few studies have considered the role of emotion recognition abilities and empathic behaviour for different bullying roles. This study investigated physical and relational bullying involvement in relation to basic emotion recognition abilities,…
Lysaker, Paul H; Leonhardt, Bethany L; Brüne, Martin; Buck, Kelly D; James, Alison; Vohs, Jenifer; Francis, Michael; Hamm, Jay A; Salvatore, Giampaolo; Ringer, Jamie M; Dimaggio, Giancarlo
2014-09-30
While many people with schizophrenia spectrum disorders experience difficulties understanding the feelings of others, little is known about the psychological antecedents of these deficits. To explore these issues we examined whether deficits in mental state decoding, mental state reasoning and metacognitive capacity predict performance on an emotion recognition task. Participants were 115 adults with a schizophrenia spectrum disorder and 58 adults with substance use disorders but no history of a diagnosis of psychosis, who completed the Eyes and Hinting tests. Metacognitive capacity was assessed using the Metacognition Assessment Scale Abbreviated, and emotion recognition was assessed using the Bell Lysaker Emotion Recognition Test. Results revealed that the schizophrenia patients performed more poorly than controls on tests of emotion recognition, mental state decoding, mental state reasoning and metacognition. Lesser capacities for mental state decoding, mental state reasoning and metacognition were all uniquely related to emotion recognition within the schizophrenia group, even after controlling for neurocognition and symptoms in a stepwise multiple regression. Results suggest that deficits in emotion recognition in schizophrenia may partly result from a combination of impairments in the ability to judge the cognitive and affective states of others and difficulties forming complex representations of self and others. Published by Elsevier Ireland Ltd.
Basic and complex emotion recognition in children with autism: cross-cultural findings.
Fridenson-Hayo, Shimrit; Berggren, Steve; Lassalle, Amandine; Tal, Shahar; Pigat, Delia; Bölte, Sven; Baron-Cohen, Simon; Golan, Ofer
2016-01-01
Children with autism spectrum conditions (ASC) have emotion recognition deficits when tested in different expression modalities (face, voice, body). However, these findings usually focus on basic emotions, using one or two expression modalities. In addition, cultural similarities and differences in emotion recognition patterns in children with ASC have not been explored before. The current study examined the similarities and differences in the recognition of basic and complex emotions by children with ASC and typically developing (TD) controls across three cultures: Israel, Britain, and Sweden. Fifty-five children with high-functioning ASC, aged 5-9, were compared to 58 TD children. At each site, groups were matched on age, sex, and IQ. Children were tested using four tasks, examining recognition of basic and complex emotions from voice recordings, videos of facial and bodily expressions, and emotional video scenarios including all modalities in context. Compared to their TD peers, children with ASC showed emotion recognition deficits in both basic and complex emotions on all three modalities and their integration in context. Complex emotions were harder to recognize than basic emotions for the entire sample. Cross-cultural agreement was found for all major findings, with minor deviations on the face and body tasks. Our findings highlight the multimodal nature of emotion recognition deficits in ASC, which exist for basic as well as complex emotions and are relatively stable cross-culturally. Cross-cultural research has the potential to reveal both autism-specific universal deficits and the role that specific cultures play in the way empathy operates in different countries.
Uskul, Ayse K; Paulmann, Silke; Weick, Mario
2016-02-01
Listeners have to pay close attention to a speaker's tone of voice (prosody) during daily conversations. This is particularly important when trying to infer the emotional state of the speaker. Although a growing body of research has explored how emotions are processed from speech in general, little is known about how psychosocial factors such as social power can shape the perception of vocal emotional attributes. Thus, the present studies explored how social power affects emotional prosody recognition. In a correlational study (Study 1) and an experimental study (Study 2), we show that high power is associated with lower accuracy in emotional prosody recognition than low power. These results, for the first time, suggest that individuals experiencing high or low power perceive emotional tone of voice differently. (c) 2016 APA, all rights reserved.
Age, gender, and puberty influence the development of facial emotion recognition.
Lawrence, Kate; Campbell, Ruth; Skuse, David
2015-01-01
Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children's ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6-16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modeled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children's ability to recognize facial expressions of happiness, surprise, fear, and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6-16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers.
Emotion Recognition in Preschool Children: Associations with Maternal Depression and Early Parenting
Kujawa, Autumn; Dougherty, Lea; Durbin, C. Emily; Laptook, Rebecca; Torpey, Dana; Klein, Daniel N.
2013-01-01
Emotion knowledge in childhood has been shown to predict social functioning and psychological well-being, but relatively little is known about parental factors that influence its development in early childhood. There is some evidence that both parenting behavior and maternal depression are associated with emotion recognition, but previous research has only examined these factors independently. The current study assessed auditory and visual emotion recognition ability among a large sample of preschool children to examine typical emotion recognition skills in children of this age, as well as the independent and interactive effects of maternal and paternal depression and negative parenting (i.e., hostility and intrusiveness). Results indicated that children were most accurate at identifying happy emotional expressions, followed by other basic emotions. The lowest accuracy was observed for neutral expressions. A significant interaction was found between maternal depression and negative parenting behavior, such that children with a maternal history of depression were particularly sensitive to the negative effects of maladaptive parenting behavior on emotion recognition ability. No significant effects were found for paternal depression. These results highlight the importance of examining the effects of multiple interacting factors on children’s emotional development, and provide suggestions for identifying children for targeted preventive interventions. PMID:24444174
Emotion recognition ability in mothers at high and low risk for child physical abuse.
Balge, K A; Milner, J S
2000-10-01
The study sought to determine if high-risk, compared to low-risk, mothers make more emotion recognition errors when they attempt to recognize emotions in children and adults. Thirty-two demographically matched high-risk (n = 16) and low-risk (n = 16) mothers were asked to identify different emotions expressed by children and adults. Sets of high- and low-intensity, visual and auditory emotions were presented. Mothers also completed measures of stress, depression, and ego-strength. High-risk, compared to low-risk, mothers showed a tendency to make more errors on the visual and auditory emotion recognition tasks, with a trend toward more errors on the low-intensity, visual stimuli. However, the observed trends were not significant. Only a post-hoc test of error rates across all stimuli indicated that high-risk, compared to low-risk, mothers made significantly more emotion recognition errors. Although situational stress differences were not found, high-risk mothers reported significantly higher levels of general parenting stress and depression and lower levels of ego-strength. Since only trends and a significant post hoc finding of more overall emotion recognition errors in high-risk mothers were observed, additional research is needed to determine if high-risk mothers have emotion recognition deficits that may impact parent-child interactions. As in prior research, the study found that high-risk mothers reported more parenting stress and depression and less ego-strength.
Recognition profile of emotions in natural and virtual faces.
Dyck, Miriam; Winbeck, Maren; Leiberg, Susanne; Chen, Yuhan; Gur, Ruben C; Mathiak, Klaus
2008-01-01
Computer-generated virtual faces are becoming increasingly realistic, including the simulation of emotional expressions. These faces can be used as well-controlled, realistic and dynamic stimuli in emotion research. However, the validity of virtual facial expressions in comparison to natural emotion displays still needs to be shown for the different emotions and different age groups. Thirty-two healthy volunteers between the ages of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotions: happiness, sadness, anger, fear, disgust, and neutral. Results indicate that virtual emotions were recognized comparably to natural ones. Recognition differences between virtual and natural faces depended on the specific emotion: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition results than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure has an influence on emotion recognition. Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g. better modelling of the naso-labial area) may lead to even better results as compared to trained actors. Due to the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications.
Embodied emotion impairment in Huntington's Disease.
Trinkler, Iris; Devignevielle, Sévérine; Achaibou, Amal; Ligneul, Romain V; Brugières, Pierre; Cleret de Langavant, Laurent; De Gelder, Beatrice; Scahill, Rachael; Schwartz, Sophie; Bachoud-Lévi, Anne-Catherine
2017-07-01
Theories of embodied cognition suggest that perceiving an emotion involves somatovisceral and motoric re-experiencing. Here we suggest taking such an embodied stance when looking at emotion processing deficits in patients with Huntington's Disease (HD), a neurodegenerative motor disorder. The literature on these patients' emotion recognition deficit has recently been enriched by some reports of impaired emotion expression. The goal of the study was to find out if expression deficits might be linked to a more motoric level of impairment. We used electromyography (EMG) to compare voluntary emotion expression from words to emotion imitation from static face images, and spontaneous emotion mimicry in 28 HD patients and 24 matched controls. For the latter two imitation conditions, an underlying emotion understanding is not imperative (even though performance might be helped by it). EMG measures were compared to emotion recognition and to the capacity to identify and describe emotions using alexithymia questionnaires. Alexithymia questionnaires tap into the more somato-visceral or interoceptive aspects of emotion perception. Furthermore, we correlated patients' expression and recognition scores to cerebral grey matter volume using voxel-based morphometry (VBM). EMG results replicated impaired voluntary emotion expression in HD. Critically, voluntary imitation and spontaneous mimicry were equally impaired and correlated with impaired recognition. By contrast, alexithymia scores were normal, suggesting that emotion representations on the level of internal experience might be spared. Recognition correlated with brain volume in the caudate as well as in areas previously associated with shared action representations, namely somatosensory, posterior parietal, posterior superior temporal sulcus (pSTS) and subcentral sulcus. Together, these findings indicate that in these patients emotion deficits might be tied to the "motoric level" of emotion expression. Such a double-sided recognition and expression impairment may have important consequences, interrupting empathy in nonverbal communication both ways (understanding and being understood), independently of intact internal experience of emotion. Copyright © 2017 Elsevier Ltd. All rights reserved.
The influence of emotion on keyboard typing: an experimental study using visual stimuli.
Lee, Po-Ming; Tsui, Wei-Hsuan; Hsiao, Tzu-Chien
2014-06-20
Emotion recognition technology plays an essential role in enhancing Human-Computer Interaction (HCI). In recent years, a novel approach to emotion recognition based on keystroke dynamics has been reported. This approach is desirable in HCI because the data used are non-intrusive and easy to obtain. However, previous studies offered only limited investigation of the phenomenon itself. This study aims to examine the source of variance in keystroke typing patterns caused by emotions. We conducted a controlled experiment to collect subjects' keystroke data in different emotional states induced by the International Affective Picture System (IAPS). Two-way Valence (3) × Arousal (3) ANOVAs were used to examine the collected dataset. The results indicate that the effect of emotion is significant (p<.001) for the keystroke duration, keystroke latency, and accuracy rate of keyboard typing. However, the size of the emotional effect is small compared to the individual variability. Our findings support the conclusion that keystroke duration, keystroke latency, and the accuracy rate of typing are influenced by emotional states. Notably, the finding about effect size suggests that the accuracy of emotion recognition could be further improved with personalized models. It also explains why real-world applications that authenticate users by monitoring keystrokes may not be disturbed by users' emotional states. The experiment was conducted using standard instruments and hence is expected to be highly reproducible.
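Keystroke duration (key-down to key-up) and keystroke latency (key-up to the next key-down), the two timing features the study analyzes, are straightforward to compute from timestamped key events. The sketch below assumes a simple event format; it is not the study's instrumentation.

```python
from dataclasses import dataclass

@dataclass
class KeyEvent:
    key: str
    down_t: float  # key-press timestamp, seconds
    up_t: float    # key-release timestamp, seconds

def keystroke_features(events):
    """Return per-key durations and between-key latencies."""
    durations = [e.up_t - e.down_t for e in events]
    latencies = [b.down_t - a.up_t for a, b in zip(events, events[1:])]
    return durations, latencies

events = [KeyEvent("h", 0.00, 0.08), KeyEvent("i", 0.21, 0.30)]
durs, lats = keystroke_features(events)
print(durs, lats)  # durations ~[0.08, 0.09], latency ~[0.13]
```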
The Relationships among Facial Emotion Recognition, Social Skills, and Quality of Life.
ERIC Educational Resources Information Center
Simon, Elliott W.; And Others
1995-01-01
Forty-six institutionalized adults with mild or moderate mental retardation were administered the Vineland Adaptive Behavior Scales (socialization domain), a subjective measure of quality of life, and a facial emotion recognition test. Facial emotion recognition, quality of life, and social skills appeared to be independent of one another. Facial…
Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition
ERIC Educational Resources Information Center
Freitag, Claudia; Schwarzer, Gudrun
2011-01-01
Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…
Emotional Faces in Context: Age Differences in Recognition Accuracy and Scanning Patterns
Noh, Soo Rim; Isaacowitz, Derek M.
2014-01-01
While age-related declines in facial expression recognition are well documented, previous research relied mostly on isolated faces devoid of context. We investigated the effects of context on age differences in recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral with respect to the facial expression to be identified. Both age groups had the highest recognition rates for facial expressions in the congruent context, followed by the neutral context, with the worst recognition rates in the incongruent context. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning characteristics of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood. PMID:23163713
Preti, Emanuele; Richetin, Juliette; Suttora, Chiara; Pisani, Alberto
2016-04-30
Dysfunctions in social cognition characterize personality disorders. However, mixed results have emerged from the literature on emotion processing. Borderline Personality Disorder (BPD) traits are variously associated with enhanced emotion recognition, with impairments, or with functioning equal to controls. These apparent contradictions might result from the complexity of the emotion recognition tasks used and from individual differences in impulsivity and effortful control. We conducted a study in a sample of undergraduate students (n=80), assessing BPD traits, using an emotion recognition task that requires the processing of only visual information or of both visual and acoustic information. We also measured individual differences in impulsivity and effortful control. Results demonstrated the moderating role of some components of impulsivity and effortful control on the ability of BPD traits to predict anger and happiness recognition. We organized the discussion around the interaction between different components of regulatory functioning and task complexity for a better understanding of emotion recognition in BPD samples.
Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C
2014-01-01
The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS. PMID:24067301
Towards a smart glove: arousal recognition based on textile Electrodermal Response.
Valenza, Gaetano; Lanata, Antonio; Scilingo, Enzo Pasquale; De Rossi, Danilo
2010-01-01
This paper investigates the possibility of using the Electrodermal Response, acquired by a sensing fabric glove with embedded textile electrodes, as a reliable means for emotion recognition. Here, all the essential steps for an automatic recognition system are described, from the recording of the physiological data set to feature-based multiclass classification. Data were collected from 35 healthy volunteers during arousal elicitation by means of International Affective Picture System (IAPS) pictures. Experimental results show high discrimination after twenty steps of cross-validation.
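The abstract outlines a feature-based multiclass pipeline evaluated by cross-validation without giving details; the sketch below shows one plausible shape for such a pipeline, where the feature matrix, class labels, and the choice of an SVM classifier are all illustrative assumptions.

```python
# Illustrative EDR arousal classifier: feature vectors -> class labels,
# evaluated by twenty-fold cross-validation. All data here are simulated.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((350, 12))           # e.g., 12 EDR features per elicitation
y = rng.integers(0, 3, size=350)    # e.g., low/medium/high arousal classes

clf = make_pipeline(StandardScaler(), SVC())
scores = cross_val_score(clf, X, y, cv=20)  # twenty cross-validation folds
print(scores.mean())
```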
NASA Astrophysics Data System (ADS)
Campo, D.; Quintero, O. L.; Bastidas, M.
2016-04-01
We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (discrete wavelet transform) was performed using the Daubechies wavelet family (Db1-Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. ANNs proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are enough to analyze and extract emotional content in audio signals, yielding a high accuracy rate in the classification of emotional states without the need for other classical frequency-time features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger and sadness, plus neutrality, for a total of seven states to identify.
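A minimal sketch of the wavelet feature-extraction step described above, using the PyWavelets library; the per-band statistics (mean, standard deviation, energy) are plausible choices, not necessarily the exact features the authors extracted.

```python
# Decompose an audio signal with a Daubechies wavelet and compute simple
# per-band statistics, as a stand-in for the paper's feature extraction.
import numpy as np
import pywt

def wavelet_features(signal, wavelet="db6", level=4):
    """Multiresolution decomposition followed by band-wise statistics."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:  # approximation band + detail bands
        feats += [np.mean(band), np.std(band), np.sum(band ** 2)]  # energy
    return np.array(feats)

# Example: features for a 1-second dummy signal sampled at 16 kHz
x = np.random.randn(16000)
print(wavelet_features(x).shape)
```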
Dmitrieva, E S; Gel'man, V Ia
2011-01-01
The listener-specific features of recognition of different emotional intonations (positive, negative and neutral) of male and female speakers in the presence or absence of background noise were studied in 49 adults aged 20-79 years. In all listeners, noise produced the most pronounced decrease in recognition accuracy for the positive emotional intonation ("joy") as compared to the other intonations, whereas it did not influence the recognition accuracy of "anger" in 65-79-year-old listeners. Higher recognition rates for emotions in noisy signals were observed when the emotional intonations were expressed by female speakers. Acoustic characteristics of noisy and clear speech signals underlying the perception of emotional speech prosody were identified for adult listeners of different ages and genders.
NASA Astrophysics Data System (ADS)
Anagnostopoulos, Christos Nikolaos; Vovoli, Eftichia
An emotion recognition framework based on sound processing could improve services in human-computer interaction. Various quantitative speech features obtained from sound processing of acted speech were tested as to whether they are sufficient to discriminate between seven emotions. Multilayer perceptrons were trained to classify gender and emotions on the basis of a 24-input vector, which provides information about the prosody of the speaker over the entire sentence using statistics of sound features. Several experiments were performed and the results are presented analytically. Emotion recognition was successful when speakers and utterances were “known” to the classifier. However, severe misclassifications occurred in the utterance-independent framework. Nevertheless, the proposed feature vector achieved promising results for utterance-independent recognition of high- and low-arousal emotions.
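A minimal sketch of the classifier setup named above (a multilayer perceptron over a 24-dimensional prosodic statistics vector, seven emotion classes), using scikit-learn with simulated data; the network size and the train/test split are assumptions.

```python
# MLP over 24 prosodic statistics per utterance -> one of 7 emotion classes.
# Features and labels are simulated stand-ins for the paper's data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 24))            # 24-dim prosody feature vectors
y = rng.integers(0, 7, size=500)     # seven emotion classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```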
Associations between facial emotion recognition and young adolescents’ behaviors in bullying
Gini, Gianluca; Altoè, Gianmarco
2017-01-01
This study investigated whether the different behaviors young adolescents can enact during bullying episodes were associated with their ability to recognize morphed facial expressions of the six basic emotions, expressed at high and low intensity. The sample included 117 middle-school students (45.3% girls; mean age = 12.4 years) who filled in a peer nomination questionnaire and individually performed a computerized emotion recognition task. Bayesian generalized mixed-effects models showed a complex picture, in which the type and intensity of emotions, students’ behavior, and gender interacted in explaining recognition accuracy. Results are discussed with a particular focus on negative emotions, suggesting a “neutral” nature of emotion recognition ability, which does not necessarily lead to moral behavior but can also be used to pursue immoral goals. PMID:29131871
Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.
Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M
2017-01-01
Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. In a double-blind, within-subjects, randomized controlled experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium-intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant (p<.06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces, and this was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhancement of facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest that the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).
Aviezer, Hillel; Hassin, Ran. R.; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo
2012-01-01
The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normally sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG’s impairment, we sought to shed light on his emotion perception by examining whether priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full face and then categorized the face’s emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG’s performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted to within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished, as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. PMID:22349446
Castro-Vale, Ivone; Severo, Milton; Carvalho, Davide; Mota-Cardoso, Rui
2015-01-01
Emotion recognition is very important for social interaction. Several mental disorders influence facial emotion recognition. War veterans and their offspring are subject to an increased risk of developing psychopathology. Emotion recognition is an important aspect that needs to be addressed in this population. To our knowledge, no test exists that is validated for use with war veterans and their offspring. The current study aimed to validate the JACFEE photo set for studying facial emotion recognition in war veterans and their offspring. The JACFEE photo set was presented to 135 participants, comprising 62 male war veterans and 73 war veterans’ offspring. The participants identified the facial emotion presented from amongst the seven emotions tested for: anger, contempt, disgust, fear, happiness, sadness, and surprise. A loglinear model was used to evaluate whether the agreement between the intended and the chosen emotions was higher than expected. Overall agreement between chosen and intended emotions was 76.3% (Cohen kappa = 0.72). The agreement ranged from 63% (sadness expressions) to 91% (happiness expressions). The reliability by emotion ranged from 0.617 to 0.843 and the overall JACFEE photo set Cronbach alpha was 0.911. The offspring showed higher agreement when compared with the veterans (RR: 41.52 vs 12.12, p < 0.001), which confirms the construct validity of the test. The JACFEE set of photos showed good validity and reliability indices, which makes it an adequate instrument for researching emotion recognition ability in the study sample of war veterans and their respective offspring. PMID:26147938
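For readers unfamiliar with the agreement statistics reported above, the sketch below computes raw agreement and Cohen's kappa on toy label vectors; the data are illustrative stand-ins, not the JACFEE ratings.

```python
# Agreement between intended and chosen emotion labels; kappa corrects the
# raw agreement rate for chance. The label vectors are toy examples.
from sklearn.metrics import cohen_kappa_score, accuracy_score

intended = ["anger", "fear", "happiness", "sadness", "anger", "disgust"]
chosen   = ["anger", "fear", "happiness", "fear",    "anger", "disgust"]

print("raw agreement:", accuracy_score(intended, chosen))
print("Cohen's kappa:", cohen_kappa_score(intended, chosen))
```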
Valence and the development of immediate and long-term false memory illusions.
Howe, Mark L; Candel, Ingrid; Otgaar, Henry; Malone, Catherine; Wimmer, Marina C
2010-01-01
Across five experiments we examined the role of valence in children's and adults' true and false memories. Using the Deese/Roediger-McDermott paradigm and either neutral or negative-emotional lists, both adults' (Experiment 1) and children's (Experiment 2) true recall and recognition were better for neutral than negative items, and although false recall was also higher for neutral items, false recognition was higher for negative items. The last three experiments examined adults' (Experiment 3) and children's (Experiments 4 and 5) 1-week long-term recognition of neutral and negative-emotional information. The results replicated the immediate recall and recognition findings from the first two experiments. More importantly, these experiments showed that although true recognition decreased over the 1-week interval, false recognition of neutral items remained unchanged whereas false recognition of negative-emotional items increased. These findings are discussed in terms of theories of emotion and memory as well as their forensic implications.
Ventura, Joseph; Wood, Rachel C.; Jimenez, Amy M.; Hellemann, Gerhard S.
2014-01-01
Background: In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? Methods: A meta-analysis of 102 studies (combined n = 4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Results: Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r = .51). In addition, the relationship between FR and EP through voice prosody (r = .58) is as strong as the relationship between FR and EP based on facial stimuli (r = .53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality (facial stimuli and voice prosody). Discussion: The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. PMID:24268469
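As background on how pooled correlations such as the r = .51 above are typically obtained, the sketch below shows the standard Fisher r-to-z aggregation with inverse-variance weights; the r values and sample sizes are illustrative, and the authors' exact pooling procedure is not specified here.

```python
# Fisher r-to-z pooling of per-study correlations (illustrative inputs).
import numpy as np

rs = np.array([0.51, 0.58, 0.53])   # example study correlations
ns = np.array([120, 90, 150])       # example study sample sizes

z = np.arctanh(rs)                  # Fisher transform
w = ns - 3                          # inverse-variance weights for z
z_bar = np.sum(w * z) / np.sum(w)
print("pooled r:", np.tanh(z_bar))  # back-transform to the r scale
```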
Gender Differences in the Recognition of Vocal Emotions
Lausen, Adi; Schacht, Annekathrin
2018-01-01
The conflicting findings from the few studies conducted with regard to gender differences in the recognition of vocal expressions of emotion have left the exact nature of these differences unclear. Several investigators have argued that a comprehensive understanding of gender differences in vocal emotion recognition can only be achieved by replicating these studies while accounting for influential factors such as stimulus type, gender-balanced samples, number of encoders, decoders, and emotional categories. This study aimed to account for these factors by investigating whether emotion recognition from vocal expressions differs as a function of both listeners' and speakers' gender. A total of N = 290 participants were randomly and equally allocated to two groups. One group listened to words and pseudo-words, while the other group listened to sentences and affect bursts. Participants were asked to categorize the stimuli with respect to the expressed emotions in a fixed-choice response format. Overall, females were more accurate than males when decoding vocal emotions; however, when testing for specific emotions these differences were small in magnitude. Speakers' gender had a significant impact on how listeners judged emotions from the voice. The group listening to words and pseudo-words had higher identification rates for emotions spoken by male than by female actors, whereas in the group listening to sentences and affect bursts the identification rates were higher when emotions were uttered by female than by male actors. The mixed pattern of emotion-specific effects, however, indicates that, in the vocal channel, the reliability of emotion judgments is not systematically influenced by speakers' gender and the related stereotypes of emotional expressivity. Together, these results extend previous findings by showing effects of listeners' and speakers' gender on the recognition of vocal emotions. They stress the importance of distinguishing these factors to explain recognition ability in the processing of emotional prosody. PMID:29922202
Emotional memory for musical excerpts in young and older adults
Alonso, Irene; Dellacherie, Delphine; Samson, Séverine
2015-01-01
The emotions evoked by music can enhance recognition of excerpts. It has been suggested that memory is better for high- than for low-arousing music (Eschrich et al., 2005; Samson et al., 2009), but it remains unclear whether positively (Eschrich et al., 2008) or negatively valenced music (Aubé et al., 2013; Vieillard and Gilet, 2013) may be better recognized. Moreover, we still know very little about the influence of age on emotional memory for music. To address these issues, we tested emotional memory for music in young and older adults using musical excerpts varying in arousal and valence. Participants completed immediate and 24 h delayed recognition tests. We predicted that highly arousing excerpts would be better recognized by both groups in immediate recognition. We hypothesized that arousal may compensate for consolidation deficits in aging, thus showing a more prominent benefit of high- over low-arousing stimuli in older than in younger adults on delayed recognition. We also hypothesized poorer retention of negative excerpts in the older group, resulting in a recognition benefit for positive over negative excerpts specific to older adults. Our results suggest that although older adults had worse recognition than young adults overall, effects of emotion on memory do not seem to be modified by aging. Results on immediate recognition suggest that recognition of low-arousing excerpts can be affected by valence, with better memory for positive relative to negative low-arousing music. However, 24 h delayed recognition results demonstrate effects of emotion on memory consolidation regardless of age, with a recognition benefit for high arousal and for negatively valenced music. The present study highlights the role of emotion in memory consolidation. Findings are examined in light of the literature on emotional memory for music and for other stimuli. We finally discuss the implications of the present results for potential music interventions in aging and dementia. PMID:25814950
Does Facial Expression Recognition Provide a Toehold for the Development of Emotion Understanding?
ERIC Educational Resources Information Center
Strand, Paul S.; Downs, Andrew; Barbosa-Leiker, Celestina
2016-01-01
The authors explored predictions from basic emotion theory (BET) that facial emotion expression recognition skills are insular with respect to their own development, and yet foundational to the development of emotional perspective-taking skills. Participants included 417 preschool children for whom estimates of these 2 emotion understanding…
Jürgens, Rebecca; Grass, Annika; Drolet, Matthis; Fischer, Julia
Both in the performative arts and in emotion research, professional actors are assumed to be capable of delivering emotions comparable to spontaneous emotional expressions. This study examines the effects of acting training on vocal emotion depiction and recognition. We predicted that professional actors express emotions in a more realistic fashion than non-professional actors. However, professional acting training may lead to a particular speech pattern; this might account for vocal expressions by actors that are less comparable to authentic samples than the ones by non-professional actors. We compared 80 emotional speech tokens from radio interviews with 80 re-enactments by professional and inexperienced actors, respectively. We analyzed recognition accuracies for emotion and authenticity ratings and compared the acoustic structure of the speech tokens. Both play-acted conditions yielded similar recognition accuracies and possessed more variable pitch contours than the spontaneous recordings. However, professional actors exhibited signs of different articulation patterns compared to non-trained speakers. Our results indicate that for emotion research, emotional expressions by professional actors are not better suited than those from non-actors.
Biases in facial and vocal emotion recognition in chronic schizophrenia
Dondaine, Thibaut; Robert, Gabriel; Péron, Julie; Grandjean, Didier; Vérin, Marc; Drapier, Dominique; Millet, Bruno
2014-01-01
There has been extensive research on impaired emotion recognition in schizophrenia in the facial and vocal modalities. The literature points to biases toward non-relevant emotions for emotional faces, but few studies have examined biases in emotion recognition across different modalities (facial and vocal). In order to test emotion recognition biases, we exposed 23 patients with stabilized chronic schizophrenia and 23 healthy controls (HCs) to emotional facial and vocal tasks, asking them to rate emotional intensity on visual analog scales. We showed that patients with schizophrenia provided higher intensity ratings on the non-target scales (e.g., the surprise scale for fear stimuli) than HCs for both tasks. Furthermore, with the exception of neutral vocal stimuli, they provided the same intensity ratings on the target scales as the HCs. These findings suggest that patients with chronic schizophrenia have emotional biases when judging emotional stimuli in the visual and vocal modalities. These biases may stem from a basic sensorial deficit, a high-order cognitive dysfunction, or both. The respective roles of prefrontal-subcortical circuitry and the basal ganglia are discussed. PMID:25202287
Kempnich, Clare L; Wong, Dana; Georgiou-Karistianis, Nellie; Stout, Julie C
2017-04-01
Deficits in the recognition of negative emotions emerge before clinical diagnosis in Huntington's disease (HD). To address emotion recognition deficits, which have been shown in schizophrenia to be improved by computerized training, we conducted a study of the feasibility and efficacy of computerized training of emotion recognition in HD. We randomly assigned 22 individuals with premanifest or early symptomatic HD to the training or control group. The training group used a self-guided online training program, MicroExpression Training Tool (METT), twice weekly for 4 weeks. All participants completed measures of emotion recognition at baseline and post-training time-points. Participants in the training group also completed training adherence measures. Participants in the training group completed seven of the eight sessions on average. Results showed a significant group by time interaction, indicating that METT training was associated with improved accuracy in emotion recognition. Although sample size was small, our study demonstrates that emotion recognition remediation using the METT is feasible in terms of training adherence. The evidence also suggests METT may be effective in premanifest or early-symptomatic HD, opening up a potential new avenue for intervention. Further study with a larger sample size is needed to replicate these findings, and to characterize the durability and generalizability of these improvements, and their impact on functional outcomes in HD. (JINS, 2017, 23, 314-321).
Encoding conditions affect recognition of vocally expressed emotions across cultures.
Jürgens, Rebecca; Drolet, Matthis; Pirow, Ralph; Scheiner, Elisabeth; Fischer, Julia
2013-01-01
Although the expression of emotions in humans is considered to be largely universal, cultural effects contribute to both emotion expression and recognition. To disentangle the interplay between these factors, play-acted and authentic (non-instructed) vocal expressions of emotions were used, on the assumption that cultural effects may contribute differentially to the recognition of staged and spontaneous emotions. Speech tokens depicting four emotions (anger, sadness, joy, fear) were obtained from German radio archives and re-enacted by professional actors, and presented to 120 participants from Germany, Romania, and Indonesia. Participants in all three countries were poor at distinguishing between play-acted and spontaneous emotional utterances (58.73% correct on average with only marginal cultural differences). Nevertheless, authenticity influenced emotion recognition: across cultures, anger was recognized more accurately when play-acted (z = 15.06, p < 0.001) and sadness when authentic (z = 6.63, p < 0.001), replicating previous findings from German populations. German subjects revealed a slight advantage in recognizing emotions, indicating a moderate in-group advantage. There was no difference between Romanian and Indonesian subjects in the overall emotion recognition. Differential cultural effects became particularly apparent in terms of differential biases in emotion attribution. While all participants labeled play-acted expressions as anger more frequently than expected, German participants exhibited a further bias toward choosing anger for spontaneous stimuli. In contrast to the German sample, Romanian and Indonesian participants were biased toward choosing sadness. These results support the view that emotion recognition rests on a complex interaction of human universals and cultural specificities. Whether and in which way the observed biases are linked to cultural differences in self-construal remains an issue for further investigation.
Mirandola, Chiara; Toffalini, Enrico; Grassano, Massimo; Cornoldi, Cesare; Melinder, Annika
2014-01-01
The present experiment investigated whether emotionally negative and arousing content of to-be-remembered scripted material affects the propensity toward memory distortions. We further investigated whether elaboration of the studied material through free recall affects the magnitude of memory errors. In this study participants saw eight scripts. Each of the scripts included the effect of an action, the cause of which was not presented. Effects were either negatively emotional or neutral. Participants were assigned either to a yes/no recognition test group (recognition) or to a recall and yes/no recognition test group (elaboration + recognition). Results showed that participants in the recognition group produced fewer memory errors in the emotional condition. Conversely, elaboration + recognition participants had lower accuracy and produced more emotional memory errors than the other group, suggesting a mediating role of semantic elaboration in the generation of false memories. The role of emotions and semantic elaboration in the generation of false memories is discussed.
Instructions to mimic improve facial emotion recognition in people with sub-clinical autism traits.
Lewis, Michael B; Dunn, Emily
2017-11-01
People tend to mimic the facial expressions of others. It has been suggested that this helps provide social glue between affiliated people, but it could also aid recognition of emotions through embodied cognition. The degree of facial mimicry, however, varies between individuals and is limited in people with autism spectrum conditions (ASC). The present study sought to investigate the effect of promoting facial mimicry during a facial-emotion-recognition test. In two experiments, participants without an ASC diagnosis had their autism quotient (AQ) measured. Following a baseline test, they repeated the emotion-recognition test, but half of the participants were asked to mimic the target face they saw prior to making their responses. Mimicry improved emotion recognition, and further analysis revealed that the largest improvement was for participants who had higher scores on autism traits. In fact, recognition performance was best overall for people who had high AQ scores but also received the instruction to mimic. Implications for people with ASC are explored.
Limbrecht-Ecklundt, Kerstin; Scheck, Andreas; Jerg-Bretzke, Lucia; Walter, Steffen; Hoffmann, Holger; Traue, Harald C.
2013-01-01
Objective: This article examines potential methodological problems in the application of a forced-choice response format in facial emotion recognition. Methodology: 33 subjects were presented with validated facial stimuli. The task was to decide which emotion was shown. In addition, the subjective certainty concerning the decision was recorded. Results: The detection rates are 68% for fear, 81% for sadness, 85% for anger, 87% for surprise, 88% for disgust, and 94% for happiness, and are thus well above random probability. Conclusion: This study refutes the concern that the use of forced-choice formats may not adequately reflect actual recognition performance. The use of standardized tests to examine emotion recognition ability leads to valid results and can be used in different contexts. For example, the images presented here appear suitable for diagnosing deficits in emotion recognition in the context of psychological disorders and for mapping treatment progress. PMID:23798981
Kujawa, Autumn; Dougherty, Lea; Durbin, C Emily; Laptook, Rebecca; Torpey, Dana; Klein, Daniel N
2014-02-01
Emotion knowledge in childhood has been shown to predict social functioning and psychological well-being, but relatively little is known about parental factors that influence its development in early childhood. There is some evidence that both parenting behavior and maternal depression are associated with emotion recognition, but previous research has only examined these factors independently. The current study assessed auditory and visual emotion recognition ability among a large sample of preschool children to examine typical emotion recognition skills in children of this age, as well as the independent and interactive effects of maternal and paternal depression and negative parenting (i.e., hostility and intrusiveness). Results indicated that children were most accurate at identifying happy emotional expressions. The lowest accuracy was observed for neutral expressions. A significant interaction was found between maternal depression and negative parenting behavior: children with a maternal history of depression were particularly sensitive to the negative effects of maladaptive parenting behavior on emotion recognition ability. No significant effects were found for paternal depression. These results highlight the importance of examining the effects of multiple interacting factors on children's emotional development and provide suggestions for identifying children for targeted preventive interventions.
Dalkıran, Mihriban; Tasdemir, Akif; Salihoglu, Tamer; Emul, Murat; Duran, Alaattin; Ugur, Mufit; Yavuz, Ruhi
2017-09-01
People with schizophrenia have impairments in emotion recognition along with other social cognitive deficits. In the current study, we aimed to investigate the immediate benefits of ECT on facial emotion recognition ability. Thirty-two treatment-resistant patients with schizophrenia who had been indicated for ECT were enrolled in the study. Facial emotion stimuli were a set of 56 photographs that depicted seven basic emotions: sadness, anger, happiness, disgust, surprise, fear, and neutral faces. The average age of the participants was 33.4 ± 10.5 years. The rate of recognizing the disgusted facial expression increased significantly after ECT (p < 0.05), and no significant changes were found for the rest of the facial expressions (p > 0.05). After ECT, response times to the fearful and happy facial expressions were significantly shorter (p < 0.05). Facial emotion recognition is an important social cognitive skill for social harmony, appropriate relationships, and independent living. At the least, the ECT sessions do not seem to affect facial emotion recognition ability negatively, and they appear to improve identification of the disgusted facial emotion, which is related to dopamine-rich regions in the brain.
ERIC Educational Resources Information Center
Sawyer, Alyssa C. P.; Williamson, Paul; Young, Robyn
2014-01-01
Deficits in emotion recognition and social interaction characterize individuals with Asperger's Disorder (AS). Moreover they also appear to be less able to accurately use confidence to gauge their emotion recognition accuracy (i.e., metacognitive monitoring). The aim of this study was to extend this finding by considering both monitoring and…
Feasibility Testing of a Wearable Behavioral Aid for Social Learning in Children with Autism.
Daniels, Jena; Haber, Nick; Voss, Catalin; Schwartz, Jessey; Tamura, Serena; Fazel, Azar; Kline, Aaron; Washington, Peter; Phillips, Jennifer; Winograd, Terry; Feinstein, Carl; Wall, Dennis P
2018-01-01
Recent advances in computer vision and wearable technology have created an opportunity to introduce mobile therapy systems for autism spectrum disorders (ASD) that can respond to the increasing demand for therapeutic interventions; however, feasibility questions must be answered first. We studied the feasibility of a prototype therapeutic tool for children with ASD using Google Glass, examining whether children with ASD would wear such a device, whether providing the emotion classification would improve emotion recognition, and how emotion recognition differs between ASD participants and neurotypical controls (NC). We ran a controlled laboratory experiment with 43 children: 23 with ASD and 20 NC. Children identified static facial images on a computer screen with one of 7 emotions in 3 successive batches: the first with no information about emotion provided to the child, the second with the correct classification from the Glass labeling the emotion, and the third again without emotion information. We then trained a logistic regression classifier on the emotion confusion matrices generated by the two information-free batches to predict ASD versus NC. All 43 children were comfortable wearing the Glass. ASD and NC participants who completed the computer task with Glass providing audible emotion labeling (n = 33) showed increased accuracy in emotion labeling, and the logistic regression classifier achieved an accuracy of 72.7%. Further analysis suggests that the ability to recognize surprise, fear, and neutrality may distinguish ASD cases from NC. This feasibility study supports the utility of a wearable device for social affective learning in ASD children and demonstrates subtle differences in how ASD and NC children perform on an emotion recognition task.
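A minimal sketch of the classifier described above (logistic regression over flattened 7 × 7 emotion confusion matrices, predicting ASD versus NC), using scikit-learn; the data here are simulated stand-ins for the study's matrices.

```python
# Logistic regression on flattened 7x7 confusion matrices (49 features per
# child), evaluated by cross-validation. All inputs are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((43, 49))            # one flattened confusion matrix per child
y = np.array([1] * 23 + [0] * 20)   # 1 = ASD, 0 = neurotypical control

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```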
Gender differences in emotion recognition: Impact of sensory modality and emotional category.
Lambrecht, Lena; Kreifelts, Benjamin; Wildgruber, Dirk
2014-04-01
Results from studies on gender differences in emotion recognition vary, depending on the types of emotion and the sensory modalities used for stimulus presentation. This makes comparability between different studies problematic. This study investigated emotion recognition in healthy participants (N = 84; 40 males; ages 20 to 70 years), using dynamic stimuli displayed by two genders in three different sensory modalities (auditory, visual, audio-visual) and five emotional categories. The participants were asked to categorize the stimuli on the basis of their nonverbal emotional content (happy, alluring, neutral, angry, and disgusted). Hit rates and category selection biases were analysed. Women were found to be more accurate in the recognition of emotional prosody. This effect was partially mediated by hearing loss at 8,000 Hz. Moreover, there was a gender-specific selection bias for alluring stimuli: men, as compared to women, chose "alluring" more often when a stimulus was presented by a woman than by a man.
Gamond, L; Cattaneo, Z
2016-12-01
Consistent evidence suggests that emotional facial expressions are better recognized when the expresser and the perceiver belong to the same social group (the in-group advantage). In this study, we used transcranial magnetic stimulation (TMS) to investigate the possible causal involvement of the dorsomedial prefrontal cortex (dmPFC) and of the right temporo-parietal junction (rTPJ), two main nodes of the mentalizing neural network, in mediating the in-group advantage in emotion recognition. Participants performed an emotion discrimination task in a minimal (blue/green) group paradigm. We found that disrupting activity in the dmPFC significantly reduced the effect of minimal group membership on emotion recognition, reducing participants' ability to discriminate emotions expressed by in-group members. In turn, stimulation over the rTPJ mainly affected emotion discrimination per se, irrespective of group membership. Overall, our results point to a causal role of the dmPFC in mediating the in-group advantage in emotion recognition, favoring intragroup communication.
Using Noninvasive Wearable Computers to Recognize Human Emotions from Physiological Signals
NASA Astrophysics Data System (ADS)
Lisetti, Christine Lætitia; Nasoz, Fatma
2004-12-01
We discuss the strong relationship between affect and cognition and the importance of emotions in multimodal human computer interaction (HCI) and user modeling. We introduce the overall paradigm for our multimodal system that aims at recognizing its users' emotions and at responding to them accordingly depending upon the current context or application. We then describe the design of the emotion elicitation experiment we conducted by collecting, via wearable computers, physiological signals from the autonomic nervous system (galvanic skin response, heart rate, temperature) and mapping them to certain emotions (sadness, anger, fear, surprise, frustration, and amusement). We show the results of three different supervised learning algorithms that categorize these collected signals in terms of emotions, and generalize their learning to recognize emotions from new collections of signals. We finally discuss possible broader impact and potential applications of emotion recognition for multimodal intelligent systems.
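The abstract above mentions three supervised learning algorithms without detailing them here; the sketch below illustrates the general shape of such a comparison on simulated physiological features (galvanic skin response, heart rate, temperature), with k-NN, LDA, and an MLP as stand-ins rather than the authors' exact choices.

```python
# Compare three supervised classifiers on simulated physiological features
# mapped to six emotion classes; scores are cross-validated accuracies.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((300, 3))             # GSR, heart rate, temperature per trial
y = rng.integers(0, 6, size=300)     # six target emotions

for clf in (KNeighborsClassifier(),
            LinearDiscriminantAnalysis(),
            MLPClassifier(max_iter=1000)):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```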
Crossmodal and incremental perception of audiovisual cues to emotional speech.
Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc
2010-01-01
In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests with video clips of emotional utterances collected via a variant of the well-known Velten method. More specifically, we recorded speakers who displayed positive or negative emotions, which were congruent or incongruent with the (emotional) lexical content of the uttered sentence. The first experiment is a perception experiment in which Czech participants, who do not speak Dutch, rate the perceived emotional state of Dutch speakers in a bimodal (audiovisual) or a unimodal (audio- or vision-only) condition. It was found that incongruent emotional speech leads to significantly more extreme perceived emotion scores than congruent emotional speech, where the difference between congruent and incongruent emotional speech is larger for the negative than for the positive conditions. Interestingly, the largest overall differences between congruent and incongruent emotions were found for the audio-only condition, which suggests that posing an incongruent emotion has a particularly strong effect on the spoken realization of emotions. The second experiment uses a gating paradigm to test the recognition speed for various emotional expressions from a speaker's face. In this experiment participants were presented with the same clips as in experiment 1, but this time vision-only. The clips were shown in successive segments (gates) of increasing duration. Results show that participants are surprisingly accurate in their recognition of the various emotions, as they already reach high recognition scores in the first gate (after only 160 ms). Interestingly, the recognition scores rise faster for positive than for negative conditions. Finally, the gating results suggest that incongruent emotions are perceived as more intense than congruent emotions, as the former receive more extreme recognition scores than the latter, already after a short period of exposure.
Besche-Richard, C; Bourrin-Tisseron, A; Olivier, M; Cuervo-Lombard, C-V; Limosin, F
2012-06-01
Deficits in the recognition of facial emotions and in the attribution of mental states are now well documented in schizophrenic patients. However, the link between these two complex cognitive functions remains unclear, especially in schizophrenia. In this study, we tested the link between the recognition of facial emotions and mentalization capacities, notably the attribution of beliefs, in healthy and schizophrenic participants. We hypothesized that performance in the recognition of facial emotions, compared to working memory and executive functioning, would be the best predictor of the capacity to attribute a belief. Twenty schizophrenic participants according to DSM-IV-TR criteria (mean age: 35.9 years, S.D. 9.07; mean education level: 11.15 years, S.D. 2.58), clinically stabilized and receiving neuroleptic or antipsychotic medication, participated in the study. They were matched on age (mean age: 36.3 years, S.D. 10.9) and educational level (mean educational level: 12.10, S.D. 2.25) with 30 healthy participants. All the participants were evaluated with a pool of tasks testing the recognition of facial emotions (the faces of Baron-Cohen), the attribution of beliefs (two first-order and two second-order stories), working memory (the digit span of the WAIS-III and the Corsi test) and executive functioning (Trail Making Test A and B, Wisconsin Card Sorting Test brief version). Comparing schizophrenic and healthy participants, our results confirmed a difference between performance in the recognition of facial emotions and performance in the attribution of beliefs. Simple linear regression showed that the recognition of facial emotions, compared to working memory and executive functioning, was the best predictor of performance on the theory-of-mind stories. Our results confirmed, in a sample of schizophrenic patients, the deficits in the recognition of facial emotions and in the attribution of mental states. Our new result is the demonstration that performance in the recognition of facial emotions is the best predictor of performance in the attribution of beliefs. Following Marshall et al.'s model of empathy, this link between the recognition of facial emotions and the comprehension of beliefs can be explained.
Recognizing biological motion and emotions from point-light displays in autism spectrum disorders.
Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P; Wenderoth, Nicole; Alaerts, Kaat
2012-01-01
One of the main characteristics of Autism Spectrum Disorder (ASD) is a problem with social interaction and communication. Here, we explored ASD-related alterations in 'reading' the body language of other humans. Accuracy and reaction times were assessed in two observational tasks involving the recognition of 'biological motion' and 'emotions' from point-light displays (PLDs). Eye movements were recorded during the completion of the tests. Results indicated that typically developed participants were more accurate than ASD subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed on two control tasks (involving the indication of color changes in the moving point-lights). Group differences in reaction times existed on all tasks, but effect sizes were higher for the biological motion and emotion recognition tasks. Biological motion recognition abilities were related to a person's ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not be entirely attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point-light displays. Eye movements were assessed during the completion of the tasks, and results indicated that ASD participants generally produced more saccades and shorter fixation durations compared to the control group. However, especially for emotion recognition, these altered eye movements were associated with reductions in task performance.
Age-related differences in attention and memory toward emotional stimuli.
Bi, Dandan; Han, Buxin
2015-09-01
From the perspectives of time perception and motivation, socioemotional selectivity theory (SST) postulates that, in comparison with younger adults, older adults tend to prefer positive stimuli and avoid negative stimuli. Currently the cross-cultural consistency of this positivity effect (PE) is still not clear. While empirical evidence for Western populations is accumulating, validation of the PE in Asians is still rare. The current study compared 28 younger and 24 older Chinese adults in the processing of emotional information. Eye-tracking and recognition data were collected while participants processed pictures with positive, negative, or neutral emotional information sampled from the International Affective Picture System. The results showed less negative bias in emotional attention for older adults than for younger adults, whereas for emotional recognition, only younger adults showed a negative bias while older adults showed no bias between negative and positive emotional information. Overall, compared with younger adults, emotional processing was more positive in older adults. It was concluded that Chinese older adults show a PE.
Actis-Grosso, Rossana; Bossi, Francesco; Ricciardelli, Paola
2015-01-01
We investigated whether the type of stimulus (pictures of static faces vs. body motion) contributes differently to the recognition of emotions. The performance (accuracy and response times) of 25 Low Autistic Traits (LAT group) young adults (21 males) and 20 young adults (16 males) with either High Autistic Traits or with High Functioning Autism Spectrum Disorder (HAT group) was compared in the recognition of four emotions (Happiness, Anger, Fear, and Sadness) either shown in static faces or conveyed by moving body point-light displays (PLDs). Overall, HAT individuals were as accurate as LAT ones in perceiving emotions both with faces and with PLDs. Moreover, they correctly described non-emotional actions depicted by PLDs, indicating that they perceived the motion conveyed by the PLDs per se. For LAT participants, happiness proved to be the easiest emotion to recognize: in line with previous studies we found a happy-face advantage for faces, which for the first time was also found for bodies (a happy-body advantage). Furthermore, LAT participants recognized sadness better from static faces and fear from PLDs. This advantage for motion kinematics in the recognition of fear was not present in HAT participants, suggesting that (i) emotion recognition is not generally impaired in HAT individuals, and (ii) the cues exploited for emotion recognition by LAT and HAT groups are not always the same. These findings are discussed against the background of emotional processing in typically and atypically developed individuals. PMID:26557101
A Multidimensional Approach to the Study of Emotion Recognition in Autism Spectrum Disorders
Xavier, Jean; Vignaud, Violaine; Ruggiero, Rosa; Bodeau, Nicolas; Cohen, David; Chaby, Laurence
2015-01-01
Although deficits in emotion recognition have been widely reported in autism spectrum disorder (ASD), experiments have been restricted to either facial or vocal expressions. Here, we explored multimodal emotion processing in children with ASD (N = 19) and with typical development (TD, N = 19), considering unimodal (faces and voices) and multimodal (faces/voices simultaneously) stimuli and developmental comorbidities (neuro-visual, language and motor impairments). Compared to TD controls, children with ASD had rather high and heterogeneous emotion recognition scores but also showed several significant differences: lower emotion recognition scores for visual stimuli and for neutral emotion, and a greater number of saccades during the visual task. Multivariate analyses showed that: (1) the difficulties they experienced with visual stimuli were partially alleviated with multimodal stimuli. (2) Developmental age was significantly associated with emotion recognition in TD children, whereas this was the case only for the multimodal task in children with ASD. (3) Language impairments tended to be associated with the emotion recognition scores of ASD children in the auditory modality. Conversely, in the visual or bimodal (visuo-auditory) tasks, no impact of developmental coordination disorder or neuro-visual impairments was found. We conclude that impaired emotion processing constitutes a dimension to explore in the field of ASD, as research has the potential to define more homogeneous subgroups and tailored interventions. However, it is clear that developmental age, the nature of the stimuli, and other developmental comorbidities must also be taken into account when studying this dimension. PMID:26733928
Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.
Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J
2012-11-01
Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of surprised, disgusted, fearful, happy, and neutral facial expressions, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces, indicating that more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.
Assessing collective affect recognition via the Emotional Aperture Measure.
Sanchez-Burks, Jeffrey; Bartel, Caroline A; Rees, Laura; Huy, Quy
2016-01-01
Curiosity about collective affect is undergoing a revival in many fields. This literature, tracing back to Le Bon's seminal work on crowd psychology, has established the veracity of collective affect and demonstrated its influence on a wide range of group dynamics. More recently, an interest in the perception of collective affect has emerged, revealing a need for a methodological approach for assessing collective emotion recognition to complement measures of individual emotion recognition. This article addresses this need by introducing the Emotional Aperture Measure (EAM). Three studies provide evidence that collective affect recognition requires a processing style distinct from individual emotion recognition and establishes the validity and reliability of the EAM. A sample of working managers further shows how the EAM provides unique insights into how individuals interact with collectives. We discuss how the EAM can advance several lines of research on collective affect.
Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.
Wieckowski, Andrea Trubanova; White, Susan W
2017-01-01
Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less on the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly, but not during the scripted task. Results suggest altered eye gaze to the mouth region, but not the eye region, as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.
Buchy, Lisa; Barbato, Mariapaola; Makowski, Carolina; Bray, Signe; MacMaster, Frank P; Deighton, Stephanie; Addington, Jean
2017-11-01
People with psychosis show deficits recognizing facial emotions and disrupted activation in the underlying neural circuitry. We evaluated associations between facial emotion recognition and cortical thickness using a correlation-based approach to map structural covariance networks across the brain. Fifteen people with an early psychosis provided magnetic resonance scans and completed the Penn Emotion Recognition and Differentiation tasks. Fifteen historical controls provided magnetic resonance scans. Cortical thickness was computed using CIVET and analyzed with linear models. Seed-based structural covariance analysis was done using the Mapping Anatomical Correlations Across Cerebral Cortex (MACACC) methodology. To map structural covariance networks involved in facial emotion recognition, the right somatosensory cortex and bilateral fusiform face areas were selected as seeds. Statistics were run in SurfStat. Findings showed greater cortical covariance between the right fusiform face region seed and right orbitofrontal cortex in controls than in early psychosis subjects. Facial emotion recognition scores were not significantly associated with thickness in any region. A negative effect of Penn Differentiation scores on cortical covariance was seen between the left fusiform face area seed and right superior parietal lobule in early psychosis subjects. Results suggest that facial emotion recognition ability is related to covariance in a temporal-parietal network in early psychosis. Copyright © 2017 Elsevier B.V. All rights reserved.
Yu, Shao Hua; Zhu, Jun Peng; Xu, You; Zheng, Lei Lei; Chai, Hao; He, Wei; Liu, Wei Bo; Li, Hui Chun; Wang, Wei
2012-12-01
To study the contribution of executive function to abnormal recognition of facial expressions of emotion in schizophrenia patients, facial emotion recognition, executive function, and symptoms were assessed using the Japanese and Caucasian Facial Expressions of Emotion (JACFEE), the Wisconsin Card Sorting Test (WCST), the Positive and Negative Symptom Scale, and the Hamilton Anxiety and Depression Scales in 88 paranoid schizophrenia patients and 75 healthy volunteers. Patients scored higher on the Positive and Negative Symptom Scale and the Hamilton Anxiety and Depression Scales, displayed lower JACFEE recognition accuracies, and performed more poorly on the WCST. The JACFEE recognition accuracy for contempt and disgust was negatively correlated with the negative symptom scale score, while the recognition accuracy for fear was positively correlated with the positive symptom scale score and the recognition accuracy for surprise was negatively correlated with the general psychopathology score in patients. Moreover, the WCST could predict the JACFEE recognition accuracy of contempt, disgust, and sadness in patients, and perseverative errors negatively predicted the recognition accuracy of sadness in healthy volunteers. The JACFEE recognition accuracy of sadness could predict the WCST categories in paranoid schizophrenia patients. Recognition accuracy of social/moral emotions, such as contempt, disgust and sadness, is related to executive function in paranoid schizophrenia patients, especially for sadness. Copyright © 2012 The Editorial Board of Biomedical and Environmental Sciences. Published by Elsevier B.V. All rights reserved.
Baran Tatar, Zeynep; Yargıç, İlhan; Oflaz, Serap; Büyükgök, Deniz
2015-01-01
Interpersonal relationship disorders in adults with Attention Deficit Hyperactivity Disorder (ADHD) can be associated with the impairment of non-verbal communication. The purpose of our study was to compare the emotion recognition, facial recognition and neuropsychological assessments of adult ADHD patients with those of healthy controls, and to thus determine the effect of neuropsychological data on the recognition of emotional expressions. This study, based on a case-control model, was conducted with patients diagnosed with ADHD according to the DSM-IV-TR, being followed and monitored at the adult ADHD clinic of the Psychiatry Department of the Istanbul University Istanbul Medical Faculty Hospital. The study group consisted of 40 adults (27.5% female) between the ages of 20-65 (mean age 25.96 ± 6.07; education level 15.02 ± 2.34 years) diagnosed with ADHD, and 40 controls matched with the study group with respect to age, gender, and education level. In the ADHD group, 14 (35%) of the patients had concomitant diseases. Pictures of Facial Affect, the Benton Face Recognition Test, and the Continuous Performance Test were used to respectively evaluate emotion recognition, facial recognition, and attention deficit and impulsivity. Compared with the control group, the ADHD group made more mistakes in recognizing all types of emotional expressions as well as neutral expressions, and also made more cognitive errors. Facial recognition was similar in both groups, although impulsivity had a significant effect on facial recognition. The social relationship disorders observed in ADHD may be affected by emotion recognition processes. Future studies could investigate the effects of early psychopharmacological and psychotherapeutic interventions for the main symptoms of ADHD on the impairment of emotion recognition.
Drapier, D; Péron, J; Leray, E; Sauleau, P; Biseul, I; Drapier, S; Le Jeune, F; Travers, D; Bourguignon, A; Haegelen, C; Millet, B; Vérin, M
2008-09-01
To test the hypothesis that emotion recognition and apathy share the same functional circuit involving the subthalamic nucleus (STN), a consecutive series of 17 patients with advanced Parkinson's disease (PD) was assessed 3 months before (M-3) and 3 months after (M+3) STN deep brain stimulation (DBS). Mean (+/-S.D.) age at surgery was 56.9 (8.7) years. Mean disease duration at surgery was 11.8 (2.6) years. Apathy was measured using the Apathy Evaluation Scale (AES) at both M-3 and M+3. Patients were also assessed using a computerised paradigm of facial emotion recognition [Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto: Consulting Psychologist Press] before and after STN DBS. Prior to this, the Benton Facial Recognition Test was used to check that the ability to perceive faces was intact. Apathy had significantly worsened at M+3 (42.5+/-8.9, p=0.006) after STN DBS, relative to the preoperative assessment (37.2+/-5.5). There was also a significant reduction in recognition percentages for facial expressions of fear (43.1%+/-22.9 vs. 61.6%+/-21.4, p=0.022) and sadness (52.7%+/-19.1 vs. 67.6%+/-22.8, p=0.031) after STN DBS. However, the postoperative worsening of apathy and the emotion recognition impairment were not correlated. Our results confirm that the STN is involved in both the apathy and emotion recognition networks. However, the absence of any correlation between apathy and emotion recognition impairment suggests that the worsening of apathy following surgery could not be explained by a lack of facial emotion recognition and that its behavioural and cognitive components should therefore also be taken into consideration.
Emotional Recognition in Autism Spectrum Conditions from Voices and Faces
ERIC Educational Resources Information Center
Stewart, Mary E.; McAdam, Clair; Ota, Mitsuhiko; Peppe, Sue; Cleland, Joanne
2013-01-01
The present study reports on a new vocal emotion recognition task and assesses whether people with autism spectrum conditions (ASC) perform differently from typically developed individuals on tests of emotional identification from both the face and the voice. The new test of vocal emotion contained trials in which the vocal emotion of the sentence…
The Primacy of Perceiving: Emotion Recognition Buffers Negative Effects of Emotional Labor
ERIC Educational Resources Information Center
Bechtoldt, Myriam N.; Rohrmann, Sonja; De Pater, Irene E.; Beersma, Bianca
2011-01-01
There is ample empirical evidence for negative effects of emotional labor (surface acting and deep acting) on workers' well-being. This study analyzed to what extent workers' ability to recognize others' emotions may buffer these effects. In a 4-week study with 85 nurses and police officers, emotion recognition moderated the relationship between…
ERIC Educational Resources Information Center
Tell, Dina; Davidson, Denise
2015-01-01
In this research, the emotion recognition abilities of children with autism spectrum disorder and typically developing children were compared. When facial expressions and situational cues of emotion were congruent, accuracy in recognizing emotions was good for both children with autism spectrum disorder and typically developing children. When…
Aviezer, Hillel; Hassin, Ran R; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo
2012-04-01
The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG's impairment, we sought to shed light on his emotion perception by examining whether priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full face and then categorized the face's emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG's performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished, as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. Copyright © 2012 Elsevier Ltd. All rights reserved.
Oldehinkel, Albertine J; Hartman, Catharina A; Van Oort, Floor V A; Nederhof, Esther
2015-01-01
Background: Some adolescents function poorly in apparently benign environments, while others thrive despite hassles and difficulties. The aim of this study was to examine if adolescents with specialized skills in the recognition of either positive or negative emotions have a context-dependent risk of developing an anxiety or depressive disorder during adolescence, depending on exposure to positive or harsh parenting. Methods: Data came from a large prospective Dutch population study (N = 1539). At age 11, perceived parental rejection and emotional warmth were measured by questionnaire, and emotion recognition skills by means of a reaction-time task. Lifetime diagnoses of anxiety and depressive disorders were assessed at about age 19, using a standardized diagnostic interview. Results: Adolescents who were specialized in the recognition of positive emotions had a relatively high probability to develop an anxiety disorder when exposed to parental rejection (B(specialization × rejection) = 0.23, P < 0.01) and a relatively low probability in response to parental emotional warmth (B(specialization × warmth) = −0.24, P = 0.01), while the opposite pattern was found for specialists in negative emotions. The effect of parental emotional warmth on depression onset was likewise modified by emotion recognition specialization (B = −0.13, P = 0.03), but the effect of parental rejection was not (B = 0.02, P = 0.72). In general, the relative advantage of specialists in negative emotions was restricted to fairly uncommon negative conditions. Conclusions: Our results suggest that there is no unequivocal relation between parenting behaviors and the probability to develop an anxiety or depressive disorder in adolescence, and that emotion recognition specialization may be a promising way to distinguish between various types of context-dependent reaction patterns. PMID:25642389
Liu, Yi-Hung; Wu, Chien-Te; Cheng, Wei-Teng; Hsiao, Yu-Tsung; Chen, Po-Ming; Teng, Jyh-Tong
2014-01-01
Electroencephalogram-based emotion recognition (EEG-ER) has received increasing attention in the fields of health care, affective computing, and brain-computer interface (BCI). However, satisfactory ER performance within a bi-dimensional and non-discrete emotional space using single-trial EEG data remains a challenging task. To address this issue, we propose a three-layer scheme for single-trial EEG-ER. In the first layer, a set of spectral powers of different EEG frequency bands are extracted from multi-channel single-trial EEG signals. In the second layer, the kernel Fisher's discriminant analysis method is applied to further extract features with better discrimination ability from the EEG spectral powers. The feature vector produced by layer 2 is called a kernel Fisher's emotion pattern (KFEP), and is sent into layer 3 for further classification where the proposed imbalanced quasiconformal kernel support vector machine (IQK-SVM) serves as the emotion classifier. The outputs of the three layer EEG-ER system include labels of emotional valence and arousal. Furthermore, to collect effective training and testing datasets for the current EEG-ER system, we also use an emotion-induction paradigm in which a set of pictures selected from the International Affective Picture System (IAPS) are employed as emotion induction stimuli. The performance of the proposed three-layer solution is compared with that of other EEG spectral power-based features and emotion classifiers. Results on 10 healthy participants indicate that the proposed KFEP feature performs better than other spectral power features, and IQK-SVM outperforms traditional SVM in terms of the EEG-ER accuracy. Our findings also show that the proposed EEG-ER scheme achieves the highest classification accuracies of valence (82.68%) and arousal (84.79%) among all testing methods. PMID:25061837
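For readers who want to prototype the general shape of this three-layer scheme, the sketch below is a minimal, hedged approximation: scikit-learn provides neither kernel Fisher's discriminant analysis nor the proposed IQK-SVM, so kernel PCA followed by linear discriminant analysis stands in for the KFEP stage, and a class-weighted RBF SVM stands in for the imbalance-aware classifier. The data, sampling rate, channel count, and band limits are all illustrative.

```python
# Minimal sketch of the three-layer idea, with stand-ins where the
# paper's components (KFEP, IQK-SVM) have no off-the-shelf equivalent.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(trial, fs=128):
    """Layer 1: trial (n_channels, n_samples) -> log band-power vector."""
    freqs, psd = welch(trial, fs=fs, nperseg=fs * 2, axis=-1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.log(np.concatenate(feats))

# Synthetic stand-in data: 40 trials, 32 channels, 2 s at 128 Hz.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 32, 256))
y = rng.integers(0, 2, size=40)          # e.g. low/high valence labels

X = np.array([band_powers(t) for t in X_raw])
# Layers 2-3: kernel projection + discriminant (KFEP stand-in), then a
# class-weighted RBF SVM (stand-in for the imbalance-aware IQK-SVM).
clf = make_pipeline(KernelPCA(n_components=10, kernel="rbf"),
                    LinearDiscriminantAnalysis(),
                    SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```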
Emotion Recognition in Fathers and Mothers at High-Risk for Child Physical Abuse
ERIC Educational Resources Information Center
Asla, Nagore; de Paul, Joaquin; Perez-Albeniz, Alicia
2011-01-01
Objective: The present study was designed to determine whether parents at high risk for physical child abuse, in comparison with parents at low risk, show deficits in emotion recognition, as well as to examine the moderator effect of gender and stress on the relationship between risk for physical child abuse and emotion recognition. Methods: Based…
ERIC Educational Resources Information Center
Parker, Alison E.; Mathis, Erin T.; Kupersmidt, Janis B.
2013-01-01
Research Findings: The study examined children's recognition of emotion from faces and body poses, as well as gender differences in these recognition abilities. Preschool-aged children ("N" = 55) and their parents and teachers participated in the study. Preschool-aged children completed a web-based measure of emotion recognition skills…
ERIC Educational Resources Information Center
Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew W.; Clarke, Paula; Miles, Jeremy; Nation, Kate; Clarke, Leesa; Williams, Christine
2008-01-01
We compared young people with high-functioning autism spectrum disorders (ASDs) with age, sex and IQ matched controls on emotion recognition of faces and pictorial context. Each participant completed two tests of emotion recognition. The first used Ekman series faces. The second used facial expressions in visual context. A control task involved…
ERIC Educational Resources Information Center
Birmingham, Elina; Meixner, Tamara; Iarocci, Grace; Kanan, Christopher; Smilek, Daniel; Tanaka, James W.
2013-01-01
The strategies children employ to selectively attend to different parts of the face may reflect important developmental changes in facial emotion recognition. Using the Moving Window Technique (MWT), children aged 5-12 years and adults ("N" = 129) explored faces with a mouse-controlled window in an emotion recognition task. An…
Canli, Derya; Ozdemir, Hatice; Kocak, Orhan Murat
2015-08-01
Studies provide evidence for impaired social cognition in schizotypy and its association with negative symptoms. Cognitive features related to magical ideation, a component of the positive dimension of schizotypy, have been less investigated. We aimed to assess social cognitive functioning among adolescents with high magical ideation scores, focusing mainly on face and emotion recognition. Twenty-two subjects with magical ideation scale scores above the cut-off level and 22 controls with the lowest scores, drawn from among 250 students screened with this scale, were included in the study. A face and emotion recognition n-back test, the empathy quotient, theory of mind tests and the Physical Anhedonia Scale were applied to both the magical ideation and control groups. The magical ideation group performed significantly worse than controls on both the face and emotion recognition tests. Emotion recognition performance was found to be affected by memory load, with sadness the only emotion revealing a difference between the two groups. Empathy and theory of mind tests did not distinguish the magical ideation group from controls. Our findings provide evidence for a memory-load-dependent deficit in negative emotion recognition associated with magical ideation in adolescents. Copyright © 2015 Elsevier Inc. All rights reserved.
On the Time Course of Vocal Emotion Recognition
Pell, Marc D.; Kotz, Sonja A.
2011-01-01
How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically-anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses looked at how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing. PMID:22087275
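As a worked illustration of the paradigm's key measurement, the sketch below computes an identification point from one listener's gated responses, assuming the usual gating rule: the earliest gate at which the target emotion is selected and the response never changes at any later gate. The gate durations and responses are invented for illustration (the study gated by syllables rather than fixed durations).

```python
# Identification point from gated responses, under the standard gating
# rule: the earliest gate at which the target emotion is chosen and the
# response never changes at any later gate.
def identification_gate(responses, target):
    for g in range(len(responses)):
        if all(r == target for r in responses[g:]):
            return g                      # 0-based gate index
    return None                           # never stably identified

# Invented example: a listener hears successively longer fragments.
gate_offsets_ms = [200, 400, 600, 800, 1000, 1200, 1400]  # illustrative
responses = ["neutral", "fear", "fear", "fear", "fear", "fear", "fear"]
g = identification_gate(responses, "fear")
if g is not None:
    print(f"identified at gate {g + 1}, roughly {gate_offsets_ms[g]} ms in")
```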
Facial emotion recognition is inversely correlated with tremor severity in essential tremor.
Auzou, Nicolas; Foubert-Samier, Alexandra; Dupouy, Sandrine; Meissner, Wassilios G
2014-04-01
Here we assess limbic and orbitofrontal control in 20 patients with essential tremor (ET) and 18 age-matched healthy controls using the Ekman Facial Emotion Recognition Task and the Iowa Gambling Task. Our results show an inverse relation between facial emotion recognition and tremor severity. ET patients also showed worse performance in joy and fear recognition, as well as subtle abnormalities in risk detection, but these differences did not reach significance after correction for multiple testing.
Stability of facial emotion recognition performance in bipolar disorder.
Martino, Diego J; Samamé, Cecilia; Strejilevich, Sergio A
2016-09-30
The aim of this study was to assess the performance in emotional processing over time in a sample of euthymic patients with bipolar disorder (BD). Performance in the facial recognition of the six basic emotions (surprise, anger, sadness, happiness, disgust, and fear) did not change during a follow-up period of almost 7 years. These preliminary results suggest that performance in facial emotion recognition might be stable over time in BD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
2015-05-28
[Fragment of a report on a neural network for real-time speech-emotion recognition (in-house effort, program element 62788F). The surviving text notes that speech-based emotion recognition is simpler and requires fewer computational resources than other inputs such as facial expressions, and cites the Berlin Database of Emotional Speech; the remainder of the entry is report-documentation and reference-list residue.]
Vogel, Bastian D; Brück, Carolin; Jacob, Heike; Eberle, Mark; Wildgruber, Dirk
2016-07-07
Impaired interpretation of nonverbal emotional cues in patients with schizophrenia has been reported in several studies, and a clinical relevance of these deficits for social functioning has been assumed. However, it is unclear to what extent the impairments depend on specific emotions or specific channels of nonverbal communication. Here, the effect of cue modality and emotional category on the accuracy of emotion recognition was evaluated in 21 patients with schizophrenia and compared to a healthy control group (n = 21). To this end, dynamic stimuli comprising speakers of both genders in three different sensory modalities (auditory, visual and audiovisual) and five emotional categories (happy, alluring, neutral, angry and disgusted) were used. Patients with schizophrenia were found to be impaired in emotion recognition in comparison to the control group across all stimuli. Considering specific emotions, more severe deficits were revealed in the recognition of alluring stimuli and less severe deficits in the recognition of disgusted stimuli, as compared to all other emotions. Regarding cue modality, the extent of the impairment in emotion recognition did not differ significantly between auditory and visual cues across all emotional categories. However, patients with schizophrenia showed significantly more severe disturbances for vocal as compared to facial cues when sexual interest was expressed (alluring stimuli), whereas more severe disturbances for facial as compared to vocal cues were observed when happiness or anger was expressed. Our results confirm that perceptual impairments can be observed for vocal as well as facial cues conveying various social and emotional connotations. The observed differences in the severity of impairments, with the most severe deficits for alluring expressions, might be related to specific difficulties in recognizing the complex social emotional information of interpersonal intentions as compared to "basic" emotional states. Therefore, future studies evaluating the perception of nonverbal cues should consider a broader range of social and emotional signals beyond basic emotions, including attitudes and interpersonal intentions. Identifying specific domains of social perception particularly prone to misunderstandings in patients with schizophrenia might allow for a refinement of interventions aiming at improving social functioning.
Emotional recognition from the speech signal for a virtual education agent
NASA Astrophysics Data System (ADS)
Tickle, A.; Raghu, S.; Elshaw, M.
2013-06-01
This paper explores the extraction of features from the speech wave to perform intelligent emotion recognition. A feature extraction tool (openSMILE) was used to obtain a baseline set of 998 acoustic features from a set of emotional speech recordings captured with a microphone. The initial feature set was reduced to the most important features so that emotion recognition could be performed with a supervised neural network. Given that the future use of virtual education agents lies in making the agents more interactive, developing agents with the capability to recognise and adapt to the emotional state of humans is an important step.
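A minimal sketch of this kind of pipeline, assuming the 998 acoustic features have already been exported (for example, as an openSMILE CSV loaded into an array). The paper does not say which reduction method or network architecture was used, so mutual-information feature selection and a small multilayer perceptron serve as stand-ins.

```python
# Feature reduction + supervised neural network on a precomputed
# acoustic feature matrix (998 features per utterance, as in the paper).
# The selector and network below are stand-ins, not the authors' choices.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 998))   # stand-in for openSMILE output
y = rng.integers(0, 4, size=200)      # four illustrative emotion labels

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=50),   # keep the 50 most informative
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```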
Oerlemans, Anoek M; van der Meer, Jolanda M J; van Steijn, Daphne J; de Ruiter, Saskia W; de Bruijn, Yvette G E; de Sonneville, Leo M J; Buitelaar, Jan K; Rommelse, Nanda N J
2014-05-01
Autism is a highly heritable and clinically heterogeneous neuropsychiatric disorder that frequently co-occurs with other psychopathologies, such as attention-deficit/hyperactivity disorder (ADHD). One approach to parsing this heterogeneity is to form more homogeneous subgroups of autism spectrum disorder (ASD) patients based on their underlying, heritable cognitive vulnerabilities (endophenotypes). Emotion recognition is a likely endophenotypic candidate for ASD and possibly for ADHD. Therefore, this study aimed to examine whether emotion recognition is a viable endophenotypic candidate for ASD and to assess the impact of comorbid ADHD in this context. A total of 90 children with ASD (43 with and 47 without ADHD), 79 unaffected siblings of children with ASD, and 139 controls aged 6-13 years were included to test recognition of facial emotion and affective prosody. Our results revealed that the recognition of both facial emotion and affective prosody was impaired in children with ASD and aggravated by the presence of ADHD. The latter could only be partly explained by typical ADHD cognitive deficits, such as inhibitory and attentional problems. The performance of unaffected siblings could overall be considered intermediate, somewhat worse than the controls and better than the ASD probands. Our findings suggest that emotion recognition might be a viable endophenotype in ASD and a fruitful target in future family studies of the genetic contribution to ASD and comorbid ADHD. Furthermore, our results suggest that children with comorbid ASD and ADHD are at the highest risk for emotion recognition problems.
Altered brain mechanisms of emotion processing in pre-manifest Huntington's disease.
Novak, Marianne J U; Warren, Jason D; Henley, Susie M D; Draganski, Bogdan; Frackowiak, Richard S; Tabrizi, Sarah J
2012-04-01
Huntington's disease is an inherited neurodegenerative disease that causes motor, cognitive and psychiatric impairment, including an early decline in ability to recognize emotional states in others. The pathophysiology underlying the earliest manifestations of the disease is not fully understood; the objective of our study was to clarify this. We used functional magnetic resonance imaging to investigate changes in brain mechanisms of emotion recognition in pre-manifest carriers of the abnormal Huntington's disease gene (subjects with pre-manifest Huntington's disease): 16 subjects with pre-manifest Huntington's disease and 14 control subjects underwent 1.5 tesla magnetic resonance scanning while viewing pictures of facial expressions from the Ekman and Friesen series. Disgust, anger and happiness were chosen as emotions of interest. Disgust is the emotion in which recognition deficits have most commonly been detected in Huntington's disease; anger is the emotion in which impaired recognition was detected in the largest behavioural study of emotion recognition in pre-manifest Huntington's disease to date; and happiness is a positive emotion to contrast with disgust and anger. Ekman facial expressions were also used to quantify emotion recognition accuracy outside the scanner and structural magnetic resonance imaging with voxel-based morphometry was used to assess the relationship between emotion recognition accuracy and regional grey matter volume. Emotion processing in pre-manifest Huntington's disease was associated with reduced neural activity for all three emotions in partially separable functional networks. Furthermore, the Huntington's disease-associated modulation of disgust and happiness processing was negatively correlated with genetic markers of pre-manifest disease progression in distributed, largely extrastriatal networks. The modulated disgust network included insulae, cingulate cortices, pre- and postcentral gyri, precunei, cunei, bilateral putamina, right pallidum, right thalamus, cerebellum, middle frontal, middle occipital, right superior and left inferior temporal gyri, and left superior parietal lobule. The modulated happiness network included postcentral gyri, left caudate, right cingulate cortex, right superior and inferior parietal lobules, and right superior frontal, middle temporal, middle occipital and precentral gyri. These effects were not driven merely by striatal dysfunction. We did not find equivalent associations between brain structure and emotion recognition, and the pre-manifest Huntington's disease cohort did not have a behavioural deficit in out-of-scanner emotion recognition relative to controls. In addition, we found increased neural activity in the pre-manifest subjects in response to all three emotions in frontal regions, predominantly in the middle frontal gyri. Overall, these findings suggest that pathophysiological effects of Huntington's disease may precede the development of overt clinical symptoms and detectable cerebral atrophy.
Emotion Recognition from EEG Signals Using Multidimensional Information in EMD Domain.
Zhuang, Ning; Zeng, Ying; Tong, Li; Zhang, Chi; Zhang, Hanming; Yan, Bin
2017-01-01
This paper introduces a method for feature extraction and emotion recognition based on empirical mode decomposition (EMD). Using EMD, EEG signals are decomposed into Intrinsic Mode Functions (IMFs) automatically. Multidimensional information from the IMFs is used as features: the first difference of the time series, the first difference of the phase, and the normalized energy. The performance of the proposed method is verified on a publicly available emotional database. The results show that the three features are effective for emotion recognition. The role of each IMF is examined, and we find that the high-frequency component IMF1 has a significant effect on the detection of different emotional states. The informative electrodes based on the EMD strategy are analyzed. In addition, the classification accuracy of the proposed method is compared with several classical techniques, including fractal dimension (FD), sample entropy, differential entropy, and discrete wavelet transform (DWT). Experimental results on the DEAP dataset demonstrate that our method can improve emotion recognition performance.
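The three IMF features named here are simple to compute once the decomposition is available. The sketch below assumes the third-party PyEMD package (installed as EMD-signal) for the decomposition and SciPy's Hilbert transform for the instantaneous phase; the one-channel test signal is synthetic.

```python
# The three IMF features named above, computed for one EEG channel.
# Assumes the third-party PyEMD package (pip install EMD-signal).
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD

def imf_features(signal):
    imfs = EMD().emd(signal)                   # decompose into IMFs
    total_energy = np.sum(signal ** 2)
    feats = []
    for imf in imfs:
        d_series = np.mean(np.abs(np.diff(imf)))          # 1st diff of series
        phase = np.unwrap(np.angle(hilbert(imf)))
        d_phase = np.mean(np.abs(np.diff(phase)))         # 1st diff of phase
        e_norm = np.sum(imf ** 2) / total_energy          # normalized energy
        feats.append((d_series, d_phase, e_norm))
    return np.array(feats)                     # one row per IMF

fs = 128
t = np.arange(0, 4, 1 / fs)                    # 4 s synthetic "EEG"
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(2).standard_normal(t.size)
print(imf_features(sig)[:3])                   # features of the first three IMFs
```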
Investigating Patterns for Self-Induced Emotion Recognition from EEG Signals.
Zhuang, Ning; Zeng, Ying; Yang, Kai; Zhang, Chi; Tong, Li; Yan, Bin
2018-03-12
Most current approaches to emotion recognition are based on neural signals elicited by affective materials such as images, sounds and videos. However, the application of neural patterns in the recognition of self-induced emotions remains uninvestigated. In this study we inferred the patterns and neural signatures of self-induced emotions from electroencephalogram (EEG) signals. The EEG signals of 30 participants were recorded while they watched 18 Chinese movie clips which were intended to elicit six discrete emotions, including joy, neutrality, sadness, disgust, anger and fear. After watching each movie clip the participants were asked to self-induce emotions by recalling a specific scene from each movie. We analyzed the important features, electrode distribution and average neural patterns of different self-induced emotions. Results demonstrated that features related to high-frequency rhythm of EEG signals from electrodes distributed in the bilateral temporal, prefrontal and occipital lobes have outstanding performance in the discrimination of emotions. Moreover, the six discrete categories of self-induced emotion exhibit specific neural patterns and brain topography distributions. We achieved an average accuracy of 87.36% in the discrimination of positive from negative self-induced emotions and 54.52% in the classification of emotions into six discrete categories. Our research will help promote the development of comprehensive endogenous emotion recognition methods.
Parents' Emotion-Related Beliefs, Behaviours, and Skills Predict Children's Recognition of Emotion
ERIC Educational Resources Information Center
Castro, Vanessa L.; Halberstadt, Amy G.; Lozada, Fantasy T.; Craig, Ashley B.
2015-01-01
Children who are able to recognize others' emotions are successful in a variety of socioemotional domains, yet we know little about how school-aged children's abilities develop, particularly in the family context. We hypothesized that children develop emotion recognition skill as a function of parents' own emotion-related beliefs,…
ERIC Educational Resources Information Center
Golan, Ofer; Ashwin, Emma; Granader, Yael; McClintock, Suzy; Day, Kate; Leggett, Victoria; Baron-Cohen, Simon
2010-01-01
This study evaluated "The Transporters", an animated series designed to enhance emotion comprehension in children with autism spectrum conditions (ASC). n = 20 children with ASC (aged 4-7) watched "The Transporters" every day for 4 weeks. Participants were tested before and after intervention on emotional vocabulary and emotion recognition at three…
ERIC Educational Resources Information Center
Williams, Beth T.; Gray, Kylie M.; Tonge, Bruce J.
2012-01-01
Background: Children with autism have difficulties in emotion recognition and a number of interventions have been designed to target these problems. However, few emotion training interventions have been trialled with young children with autism and co-morbid ID. This study aimed to evaluate the efficacy of an emotion training programme for a group…
A voxel-based lesion study on facial emotion recognition after penetrating brain injury
Dal Monte, Olga; Solomon, Jeffrey M.; Schintu, Selene; Knutson, Kristine M.; Strenziok, Maren; Pardini, Matteo; Leopold, Anne; Raymont, Vanessa; Grafman, Jordan
2013-01-01
The ability to read emotions in the face of another person is an important social skill that can be impaired in subjects with traumatic brain injury (TBI). To determine the brain regions that modulate facial emotion recognition, we conducted a whole-brain analysis using a well-validated facial emotion recognition task and voxel-based lesion symptom mapping (VLSM) in a large sample of patients with focal penetrating TBIs (pTBIs). Our results revealed that individuals with pTBI performed significantly worse than normal controls in recognizing unpleasant emotions. VLSM mapping results showed that impairment in facial emotion recognition was due to damage in a bilateral fronto-temporo-limbic network, including medial prefrontal cortex (PFC), anterior cingulate cortex, left insula and temporal areas. Besides those common areas, damage to the bilateral and anterior regions of PFC led to impairment in recognizing unpleasant emotions, whereas bilateral posterior PFC and left temporal areas led to impairment in recognizing pleasant emotions. Our findings add empirical evidence that the ability to read pleasant and unpleasant emotions in other people's faces is a complex process involving not only a common network that includes bilateral fronto-temporo-limbic lobes, but also other regions depending on emotional valence. PMID:22496440
Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long
2012-12-30
The Chinese Facial Emotion Recognition Database (CFERD), a computer-generated three-dimensional (3D) paradigm, was developed to measure the recognition of facial emotional expressions at different intensities. The stimuli consisted of 3D colour photographic images of six basic facial emotional expressions (happiness, sadness, disgust, fear, anger and surprise) and neutral faces of the Chinese. The purpose of the present study is to describe the development and validation of the CFERD with nonclinical healthy participants (N=100; 50 men; age ranging between 18 and 50 years), and to generate a normative data set. The results showed that the sensitivity index d' [d' = Z(hit rate) − Z(false alarm rate), where Z(p), p ∈ [0,1], is the inverse of the standard normal cumulative distribution function]…
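Given that reading of Z, the sensitivity index quoted in the abstract reduces to a one-liner with SciPy's inverse normal CDF:

```python
# d' = Z(hit rate) - Z(false alarm rate), where Z is the inverse of the
# standard normal cumulative distribution function (scipy's norm.ppf).
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

print(d_prime(0.85, 0.15))  # about 2.07: a fairly sensitive observer
```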
Revealing Real-Time Emotional Responses: a Personalized Assessment based on Heartbeat Dynamics
NASA Astrophysics Data System (ADS)
Valenza, Gaetano; Citi, Luca; Lanatá, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2014-05-01
Emotion recognition through computational modeling and analysis of physiological signals has been widely investigated in the last decade. Most of the proposed emotion recognition systems require relatively long-time series of multivariate records and do not provide accurate real-time characterizations using short-time series. To overcome these limitations, we propose a novel personalized probabilistic framework able to characterize the emotional state of a subject through the analysis of heartbeat dynamics exclusively. The study includes thirty subjects presented with a set of standardized images gathered from the international affective picture system, alternating levels of arousal and valence. Due to the intrinsic nonlinearity and nonstationarity of the RR interval series, a specific point-process model was devised for instantaneous identification considering autoregressive nonlinearities up to the third-order according to the Wiener-Volterra representation, thus tracking very fast stimulus-response changes. Features from the instantaneous spectrum and bispectrum, as well as the dominant Lyapunov exponent, were extracted and considered as input features to a support vector machine for classification. Results, with emotions estimated every 10 seconds, achieve an overall accuracy of 79.29% in recognizing four emotional states based on the circumplex model of affect, with 79.15% on the valence axis and 83.55% on the arousal axis.
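The authors' instantaneous point-process model with Wiener-Volterra nonlinearities is well beyond a short example, but the overall recipe (short-window heartbeat features feeding a support vector machine over the four circumplex quadrants) can be caricatured as below. Conventional Welch LF/HF powers of an evenly resampled RR series replace the instantaneous spectral, bispectral, and Lyapunov features, and all data are synthetic.

```python
# Caricature of the pipeline: windowed heartbeat features -> SVM over
# the four circumplex quadrants. Simplified stand-in, not the authors'
# point-process method; all data and names are illustrative.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def rr_features(rr_ms, fs=4.0):
    """rr_ms: evenly resampled RR series (ms) at fs Hz -> (LF, HF) powers."""
    freqs, psd = welch(rr_ms - rr_ms.mean(), fs=fs, nperseg=min(256, rr_ms.size))
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum()   # low-frequency power
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum()   # high-frequency power
    return np.array([lf, hf])

rng = np.random.default_rng(3)
segments = rng.normal(800, 50, size=(60, 240))  # 60 one-minute RR windows at 4 Hz
X = np.array([rr_features(s) for s in segments])
y = rng.integers(0, 4, size=60)                 # quadrant labels (illustrative)
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```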
A Comparison of Facial Emotion Processing in Neurological and Psychiatric Conditions
Bediou, Benoit; Brunelin, Jérôme; d’Amato, Thierry; Fecteau, Shirley; Saoud, Mohamed; Hénaff, Marie-Anne; Krolak-Salmon, Pierre
2012-01-01
Patients suffering from various neurological and psychiatric disorders show different levels of facial emotion recognition (FER) impairment, sometimes from the early phases of the disease. Investigating the relative severity of deficits in FER across different clinical and high-risk populations has potential implications for the diagnosis and treatment of these diseases, and could also allow us to understand the neurobiological mechanisms of emotion perception itself. To investigate the role of the dopaminergic system and of the frontotemporal network in FER, we reanalyzed and compared data from four of our previous studies investigating FER performance in patients with frontotemporal dysfunctions and/or dopaminergic system abnormalities at different stages. The performance of patients was compared to the performance obtained by a specific group of matched healthy controls using Cohen’s d effect size. We thus compared emotion and gender recognition in patients with frontotemporal dementia (FTD), amnestic mild cognitive impairment (aMCI), Alzheimer’s disease (AD) at the mild dementia stage, major depressive disorder, Parkinson’s disease treated by l-DOPA (PD-ON) or not (PD-OFF), remitted schizophrenia (SCZ-rem), first-episode schizophrenia treated by antipsychotic medication (SCZ-ON), and drug-naïve first-episode schizophrenia (SCZ-OFF), as well as in unaffected siblings of patients with schizophrenia (SIB). The analyses revealed a pattern of differential impairment of emotion (but not gender) recognition across pathological conditions. On the one hand, dopaminergic medication seems not to modify the moderate deficits observed in SCZ and PD groups (ON vs. OFF), suggesting that the deficit is independent from the dopaminergic system. On the other hand, the observed increase in effect size of the deficit among the aMCI, AD, and FTD groups (and also among the SIB and SCZ-rem groups) suggests that the deficit is dependent on neurodegeneration of the frontotemporal neural networks. Our transnosographic approach combining clinical and high-risk populations with the impact of medication provides new information on the trajectory of impaired emotion perception in neuropsychiatric conditions, and on the role of the dopaminergic system and the frontotemporal network in emotion perception. PMID:22493587
Emotional content enhances true but not false memory for categorized stimuli.
Choi, Hae-Yoon; Kensinger, Elizabeth A; Rajaram, Suparna
2013-04-01
Past research has shown that emotion enhances true memory, but that emotion can either increase or decrease false memory. Two theoretical possibilities, the distinctiveness of emotional stimuli and the conceptual relatedness of emotional content, have been implicated as being responsible for influencing both true and false memory for emotional content. In the present study, we sought to identify the mechanisms that underlie these mixed findings by equating the thematic relatedness of the study materials across each type of valence used (negative, positive, or neutral). In three experiments, categorically bound stimuli (e.g., funeral, pets, and office items) were used for this purpose. When the encoding task required the processing of thematic relatedness, a significant true-memory enhancement for emotional content emerged in recognition memory, but no emotional boost to false memory (Experiment 1). This pattern persisted for true memory with a longer retention interval between study and test (24 h), and false recognition was reduced for emotional items (Experiment 2). Finally, better recognition memory for emotional items once again emerged when the encoding task (arousal ratings) required the processing of the emotional aspect of the study items, with no emotional boost to false recognition (Experiment 3). Together, these findings suggest that when emotional and neutral stimuli are equivalently high in thematic relatedness, emotion continues to improve true memory, but it does not override other types of grouping to increase false memory.
Colzato, Lorenza S; Sellaro, Roberta; Beste, Christian
2017-07-01
Charles Darwin proposed that via the vagus nerve, the tenth cranial nerve, emotional facial expressions are evolved, adaptive and serve a crucial communicative function. In line with this idea, the later-developed polyvagal theory assumes that the vagus nerve is the key phylogenetic substrate that regulates emotional and social behavior. The polyvagal theory assumes that optimal social interaction, which includes the recognition of emotion in faces, is modulated by the vagus nerve. So far, in humans, it has not yet been demonstrated that the vagus plays a causal role in emotion recognition. To investigate this we employed transcutaneous vagus nerve stimulation (tVNS), a novel non-invasive brain stimulation technique that modulates brain activity via bottom-up mechanisms. A sham/placebo-controlled, randomized cross-over within-subjects design was used to infer a causal relation between the stimulated vagus nerve and the related ability to recognize emotions as indexed by the Reading the Mind in the Eyes Test in 38 healthy young volunteers. Active tVNS, compared to sham stimulation, enhanced emotion recognition for easy items, suggesting that it promoted the ability to decode salient social cues. Our results confirm that the vagus nerve is causally involved in emotion recognition, supporting Darwin's argumentation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease
Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul
2016-01-01
According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry in emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory, suggesting that facial mimicry is a potential lever for therapeutic action in PD, even if it does not appear to be strictly required for emotion recognition as such. PMID:27467393
Tobe, Russell H; Corcoran, Cheryl M; Breland, Melissa; MacKay-Brandt, Anna; Klim, Casimir; Colcombe, Stanley J; Leventhal, Bennett L; Javitt, Daniel C
2016-08-01
Impairment in social cognition, including emotion recognition, has been extensively studied in both Autism Spectrum Disorders (ASD) and Schizophrenia (SZ). However, the relative patterns of deficit between disorders have been studied to a lesser degree. Here, we applied a social cognition battery incorporating both auditory (AER) and visual (VER) emotion recognition measures to a group of 19 high-functioning individuals with ASD, 92 individuals with SZ, and 73 healthy adult control participants. We examined group differences and correlates of basic auditory processing and processing speed. Individuals with SZ were impaired in both AER and VER, while ASD individuals were impaired in VER only. In contrast to SZ participants, those with ASD showed intact basic auditory function. Our finding of a dissociation between AER and VER deficits in ASD relative to SZ supports modality-specific theories of emotion recognition dysfunction. Future studies should focus on visual system-specific contributions to social cognitive impairment in ASD. Copyright © 2016 Elsevier Ltd. All rights reserved.
Emotional memory and perception in temporal lobectomy patients with amygdala damage.
Brierley, B; Medford, N; Shaw, P; David, A S
2004-04-01
The human amygdala is implicated in the formation of emotional memories and the perception of emotional stimuli, particularly fear, across various modalities. To discern the extent to which these functions are related, 28 patients who had anterior temporal lobectomy (13 left and 15 right) for intractable epilepsy were recruited. Structural magnetic resonance imaging showed that three of them had atrophy of their remaining amygdala. All participants were given tests of affect perception from facial and vocal expressions and of emotional memory, using a standard narrative test and a novel test of word recognition. The results were standardised against matched healthy controls. Performance on all emotion tasks in patients with unilateral lobectomy ranged from unimpaired to moderately impaired. Perception of emotions in faces and voices was (with exceptions) significantly positively correlated, indicating multimodal emotional processing. However, there was no correlation between the subjects' performance on tests of emotional memory and perception. Several subjects showed strong emotional memory enhancement but poor fear perception. Patients with bilateral amygdala damage had greater impairment, particularly on the narrative test of emotional memory, one showing superior fear recognition but absent memory enhancement. Bilateral amygdala damage is particularly disruptive of emotional memory processes in comparison with unilateral temporal lobectomy. On a cognitive level, the pattern of results implies that perception of emotional expressions and emotional memory are supported by separate processing systems or streams.
Lu, Lingxi; Bao, Xiaohan; Chen, Jing; Qu, Tianshu; Wu, Xihong; Li, Liang
2018-05-01
Under a noisy "cocktail-party" listening condition with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of the target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a speech voice, by means of certain acoustical features. However, it was unclear whether emotionally conditioning a target-speech voice that has none of the typical acoustical features of emotions (i.e., an emotionally neutral voice) can be used by listeners for enhancing target-speech recognition under speech-on-speech masking conditions. In this study we examined the recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female screaming sound that has a marked negative emotional valence. The results showed that recognition of the target speech (especially the first keyword in a target sentence) was significantly improved by emotionally conditioning the target speaker's voice. Moreover, the emotional unmasking effect was independent of the unmasking effect of the perceived spatial separation between the target speech and the masker. Also, electrodermal (skin conductance) responses became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting an increase in listening effort when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker's voice does not change the acoustical parameters of the target-speech stimuli, but the emotionally conditioned vocal features can be used as cues for unmasking target speech.
Emotion words and categories: evidence from lexical decision.
Scott, Graham G; O'Donnell, Patrick J; Sereno, Sara C
2014-05-01
We examined the categorical nature of emotion word recognition. Positive, negative, and neutral words were presented in lexical decision tasks. Word frequency was additionally manipulated. In Experiment 1, "positive" and "negative" categories of words were implicitly indicated by the blocked design employed. A significant emotion-frequency interaction was obtained, replicating past research. While positive words consistently elicited faster responses than neutral words, only low frequency negative words demonstrated a similar advantage. In Experiments 2a and 2b, explicit categories ("positive," "negative," and "household" items) were specified to participants. Positive words again elicited faster responses than did neutral words. Responses to negative words, however, were no different than those to neutral words, regardless of their frequency. The overall pattern of effects indicates that positive words are always facilitated, frequency plays a greater role in the recognition of negative words, and a "negative" category represents a somewhat disparate set of emotions. These results support the notion that emotion word processing may be moderated by distinct systems.
Candra, Henry; Yuwono, Mitchell; Rifai Chai; Nguyen, Hung T; Su, Steven
2016-08-01
Psychotherapy requires appropriate recognition of a patient's facial emotion expression to provide proper treatment during a psychotherapy session. To address this need, this paper proposes a facial emotion recognition system that combines the Viola-Jones detector with a feature descriptor we term Edge-Histogram of Oriented Gradients (E-HOG). The performance of the proposed method is compared with various feature sources, including the face, the eyes, the mouth, as well as both the eyes and the mouth. Seven classes of basic emotions have been successfully identified with 96.4% accuracy using a Multi-class Support Vector Machine (SVM). The proposed descriptor E-HOG is much leaner to compute than traditional HOG, as shown by a significant improvement in processing time as high as 1833.33% (p-value = 2.43E-17) with a slight reduction in accuracy of only 1.17% (p-value = 0.0016).
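As a rough illustration of this kind of pipeline, the sketch below chains OpenCV's Viola-Jones detector, a HOG descriptor, and a multi-class SVM. Note that E-HOG is the authors' own descriptor and its exact definition is not given in the abstract; computing HOG over a Canny edge map is our stand-in assumption, not their method:

```python
import cv2
from skimage.feature import hog
from sklearn.svm import SVC

# Viola-Jones face detector (Haar cascade bundled with OpenCV)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def edge_hog_descriptor(gray):
    """Detect the face, then compute HOG on its Canny edge map
    (an approximation of the paper's E-HOG descriptor)."""
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    edges = cv2.Canny(face, 100, 200)
    return hog(edges, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Multi-class SVM over the seven basic emotions (one-vs-rest by default):
# clf = SVC(kernel="linear").fit(train_descriptors, train_labels)
# prediction = clf.predict([edge_hog_descriptor(test_image)])
```

Restricting HOG to an edge map shrinks the amount of gradient information to process, which is one plausible reading of why the authors report such a large speedup over plain HOG.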
NASA Astrophysics Data System (ADS)
Clay, Alexis; Delord, Elric; Couture, Nadine; Domenger, Gaël
We describe the joint research that we conduct in gesture-based emotion recognition and virtual augmentation of a stage, bridging together the fields of computer science and dance. After establishing a common ground for dialogue, we could conduct a research process that equally benefits both fields. For us as computer scientists, dance is a perfect application case, and dancers' artistic creativity orients our research choices. For us as dancers, computer science provides new tools for creativity and, more importantly, a new point of view that forces us to reconsider dance from its fundamentals. In this paper we hence describe our scientific work and its implications for dance. We provide an overview of our system to augment a ballet stage, taking a dancer's emotion into account. To illustrate our work in both fields, we describe three events that mixed dance, emotion recognition and augmented reality.
Early effects of duloxetine on emotion recognition in healthy volunteers
Bamford, Susan; Penton-Voak, Ian; Pinkney, Verity; Baldwin, David S; Munafò, Marcus R; Garner, Matthew
2015-01-01
The serotonin-noradrenaline reuptake inhibitor (SNRI) duloxetine is an effective treatment for major depression and generalised anxiety disorder. Neuropsychological models of antidepressant drug action suggest therapeutic effects might be mediated by the early correction of maladaptive biases in emotion processing, including the recognition of emotional expressions. Sub-chronic administration of duloxetine (for two weeks) produces adaptive changes in neural circuitry implicated in emotion processing; however, its effects on emotional expression recognition are unknown. Forty healthy participants were randomised to receive either 14 days of duloxetine (60 mg/day, titrated from 30 mg after three days) or matched placebo (with sham titration) in a double-blind, between-groups, repeated-measures design. On day 0 and day 14 participants completed a computerised emotional expression recognition task that measured sensitivity to the six primary emotions. Thirty-eight participants (19 per group) completed their course of tablets and were included in the analysis. Results provide evidence that duloxetine, compared to placebo, may reduce the accurate recognition of sadness. Drug effects were driven by changes in participants’ ability to correctly detect subtle expressions of sadness, with greater change observed in the placebo relative to the duloxetine group. These effects occurred in the absence of changes in mood. Our preliminary findings require replication, but complement recent evidence that sadness recognition is a therapeutic target in major depression, and a mechanism through which SNRIs could resolve negative biases in emotion processing to achieve therapeutic effects. PMID:25759400
Intelligibility of emotional speech in younger and older adults.
Dupuis, Kate; Pichora-Fuller, M Kathleen
2014-01-01
Little is known about the influence of vocal emotions on speech understanding. Word recognition accuracy for stimuli spoken to portray seven emotions (anger, disgust, fear, sadness, neutral, happiness, and pleasant surprise) was tested in younger and older listeners. Emotions were presented in either mixed (heterogeneous emotions mixed in a list) or blocked (homogeneous emotion blocked in a list) conditions. Three main hypotheses were tested. First, vocal emotion affects word recognition accuracy; specifically, portrayals of fear enhance word recognition accuracy because listeners orient to threatening information and/or distinctive acoustical cues such as high pitch mean and variation. Second, older listeners recognize words less accurately than younger listeners, but the effects of different emotions on intelligibility are similar across age groups. Third, blocking emotions in a list results in better word recognition accuracy, especially for older listeners, and reduces the effect of emotion on intelligibility because as listeners develop expectations about vocal emotion, the allocation of processing resources can shift from emotional to lexical processing. Emotion was the within-subjects variable: all participants heard speech stimuli consisting of a carrier phrase followed by a target word spoken by either a younger or an older talker, with an equal number of stimuli portraying each of seven vocal emotions. The speech was presented in multi-talker babble at signal-to-noise ratios adjusted for each talker and each listener age group. Listener age (younger, older), condition (mixed, blocked), and talker (younger, older) were the main between-subjects variables. Fifty-six students (mean age = 18.3 years) were recruited from an undergraduate psychology course; 56 older adults (mean age = 72.3 years) were recruited from a volunteer pool. All participants had clinically normal pure-tone audiometric thresholds at frequencies ≤3000 Hz. There were significant main effects of emotion, listener age group, and condition on the accuracy of word recognition in noise. Stimuli spoken in a fearful voice were the most intelligible, while those spoken in a sad voice were the least intelligible. Overall, word recognition accuracy was poorer for older than younger adults, but there was no main effect of talker, and the pattern of the effects of different emotions on intelligibility did not differ significantly across age groups. Acoustical analyses helped elucidate the effect of emotion and some intertalker differences. Finally, all participants performed better when emotions were blocked. For both groups, performance improved over repeated presentations of each emotion in both blocked and mixed conditions. These results are the first to demonstrate a relationship between vocal emotion and word recognition accuracy in noise for younger and older listeners. In particular, the enhancement of intelligibility by emotion is greatest for words spoken to portray fear and presented heterogeneously with other emotions. Fear may have a specialized role in orienting attention to words heard in noise. This finding may be an auditory counterpart to the enhanced detection of threat information in visual displays. The effect of vocal emotion on word recognition accuracy is preserved in older listeners with good audiograms, and both age groups benefit from blocking and the repetition of emotions.
Monetary incentives at retrieval promote recognition of involuntarily learned emotional information.
Yan, Chunping; Li, Yunyun; Zhang, Qin; Cui, Lixia
2018-03-07
Previous studies have suggested that the effects of reward on memory processes are affected by certain factors, but it remains unclear whether the effects of reward at retrieval on recognition processes are influenced by emotion. Event-related potentials (ERPs) were used to investigate the combined effect of reward and emotion on memory retrieval and its neural mechanism. The behavioral results indicated that reward at retrieval improved recognition performance under positive and negative emotional conditions. The ERP results indicated significant reward × emotion interactions in the average amplitudes during recognition: significant reward effects over frontal to parietal brain areas appeared at 130-800 ms for positive pictures and at 190-800 ms for negative pictures, but there were no significant reward effects for neutral pictures. The reward effect for positive items appeared relatively earlier, starting at 130 ms, whereas that for negative pictures began at 190 ms. These results indicate that monetary incentives at retrieval promote recognition of involuntarily learned emotional information.
Emotion Recognition in Face and Body Motion in Bulimia Nervosa.
Dapelo, Marcela Marin; Surguladze, Simon; Morris, Robin; Tchanturia, Kate
2017-11-01
Social cognition has been studied extensively in anorexia nervosa (AN), but there are few studies in bulimia nervosa (BN). This study investigated the ability of people with BN to recognise emotions in ambiguous facial expressions and in body movement. Participants were 26 women with BN, who were compared with 35 with AN, and 42 healthy controls. Participants completed an emotion recognition task by using faces portraying blended emotions, along with a body emotion recognition task by using videos of point-light walkers. The results indicated that BN participants exhibited difficulties recognising disgust in less-ambiguous facial expressions, and a tendency to interpret non-angry faces as anger, compared with healthy controls. These difficulties were similar to those found in AN. There were no significant differences amongst the groups in body motion emotion recognition. The findings suggest that difficulties with disgust and anger recognition in facial expressions may be shared transdiagnostically in people with eating disorders. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.
Recognition of emotion from body language among patients with unipolar depression
Loi, Felice; Vaidya, Jatin G.; Paradiso, Sergio
2013-01-01
Major depression may be associated with abnormal perception of emotions and impairment in social adaptation. Emotion recognition from body language and its possible implications to social adjustment have not been examined in patients with depression. Three groups of participants (51 with depression; 68 with history of depression in remission; and 69 never depressed healthy volunteers) were compared on static and dynamic tasks of emotion recognition from body language. Psychosocial adjustment was assessed using the Social Adjustment Scale Self-Report (SAS-SR). Participants with current depression showed reduced recognition accuracy for happy stimuli across tasks relative to remission and comparison participants. Participants with depression tended to show poorer psychosocial adaptation relative to remission and comparison groups. Correlations between perception accuracy of happiness and scores on the SAS-SR were largely not significant. These results indicate that depression is associated with reduced ability to appraise positive stimuli of emotional body language but emotion recognition performance is not tied to social adjustment. These alterations do not appear to be present in participants in remission suggesting state-like qualities. PMID:23608159
Gender differences in the relationship between social communication and emotion recognition.
Kothari, Radha; Skuse, David; Wakefield, Justin; Micali, Nadia
2013-11-01
To investigate the association between autistic traits and emotion recognition in a large community sample of children using facial and social motion cues, additionally stratifying by gender. A general population sample of 3,666 children from the Avon Longitudinal Study of Parents and Children (ALSPAC) were assessed on their ability to correctly recognize emotions using the faces subtest of the Diagnostic Analysis of Non-Verbal Accuracy, and the Emotional Triangles Task, a novel test assessing recognition of emotion from social motion cues. Children with autistic-like social communication difficulties, as assessed by the Social Communication Disorders Checklist, were compared with children without such difficulties. Autistic-like social communication difficulties were associated with poorer recognition of emotion from social motion cues in both genders, but were associated with poorer facial emotion recognition in boys only (odds ratio = 1.9, 95% CI = 1.4, 2.6, p = .0001). This finding must be considered in light of lower power to detect differences in girls. In this community sample of children, greater deficits in social communication skills are associated with poorer discrimination of emotions, implying there may be an underlying continuum of liability to the association between these characteristics. As a similar degree of association was observed in both genders on a novel test of social motion cues, the relatively good performance of girls on the more familiar task of facial emotion discrimination may be due to compensatory mechanisms. Our study might indicate the existence of a cognitive process by which girls with underlying autistic traits can compensate for their covert deficits in emotion recognition, although this would require further investigation. Copyright © 2013 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
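The reported effect (odds ratio = 1.9, 95% CI = 1.4, 2.6) comes from the study's own models; as a reference point, here is how an odds ratio and its Wald 95% confidence interval can be computed from a simple 2 x 2 table, using purely hypothetical counts:

```python
import numpy as np

# Hypothetical 2x2 counts: rows = social communication difficulties
# (yes/no), columns = poor vs. good facial emotion recognition.
a, b = 120, 200   # difficulties: poor / good recognition
c, d = 90, 300    # no difficulties: poor / good recognition

odds_ratio = (a / b) / (c / d)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
ci_low, ci_high = np.exp(np.log(odds_ratio)
                         + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```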
Wang, Pengyun; Li, Juan; Li, Huijie; Li, Bing; Jiang, Yang; Bao, Feng; Zhang, Shouzi
2013-11-01
This study investigated whether the observed absence of emotional memory enhancement in recognition tasks in patients with amnestic mild cognitive impairment (aMCI) could be related to their greater proportion of familiarity-based responses for all stimuli, and whether recognition tests with emotional items had better discriminative power for aMCI patients than tests with neutral items. In total, 31 aMCI patients and 30 healthy older adults participated in a recognition test followed by remember/know judgments. Positive, neutral, and negative faces were used as stimuli. For overall recognition performance, emotional memory enhancement was found only in healthy controls; they remembered more negative and positive stimuli than neutral ones. For "remember" responses, we found equivalent emotional memory enhancement in both groups, though a greater proportion of "remember" responses was observed in normal controls. For "know" responses, aMCI patients presented a larger proportion than normal controls did, and their "know" responses were not affected by emotion. A negative correlation was found between the emotional enhancement effect and the memory performance related to "know" responses. In addition, receiver operating characteristic curve analysis revealed higher diagnostic accuracy for the recognition test with emotional stimuli than for the test with neutral stimuli. The present results implied that the absence of the emotional memory enhancement effect in aMCI patients might be related to their tendency to rely more on familiarity-based "know" responses for all stimuli. Furthermore, recognition memory tests using emotional stimuli may be better able than tests using neutral stimuli to differentiate people with aMCI from cognitively normal older adults. PsycINFO Database Record (c) 2013 APA, all rights reserved.
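The diagnostic comparison above rests on ROC analysis. A minimal sketch with simulated (hypothetical) recognition scores, assuming lower accuracy in aMCI and a larger group gap for emotional than for neutral stimuli:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = np.array([1] * 31 + [0] * 30)  # 1 = aMCI, 0 = healthy control

# Hypothetical recognition accuracies per participant
emotional = np.concatenate([rng.normal(0.55, 0.08, 31),
                            rng.normal(0.80, 0.08, 30)])
neutral = np.concatenate([rng.normal(0.65, 0.10, 31),
                          rng.normal(0.75, 0.10, 30)])

# Negate the scores so that lower accuracy maps onto the aMCI label.
print("AUC (emotional):", roc_auc_score(labels, -emotional))
print("AUC (neutral):  ", roc_auc_score(labels, -neutral))
```

A higher area under the curve for the emotional scores would correspond to the better diagnostic accuracy the study reports for emotional stimuli.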
Rupp, Claudia I; Derntl, Birgit; Osthaus, Friederike; Kemmler, Georg; Fleischhacker, W Wolfgang
2017-12-01
Despite growing evidence for neurobehavioral deficits in social cognition in alcohol use disorder (AUD), the clinical relevance remains unclear, and little is known about its impact on treatment outcome. This study prospectively investigated the impact of neurocognitive social abilities at treatment onset on treatment completion. Fifty-nine alcohol-dependent patients were assessed at the beginning of inpatient treatment for alcohol dependence with measures of social cognition covering 3 core components of empathy: (i) emotion recognition (the ability to recognize emotions via facial expression), (ii) emotional perspective taking, and (iii) affective responsiveness. Subjective measures were also obtained, including estimates of task performance and a self-report measure of empathic abilities (Interpersonal Reactivity Index). According to treatment outcomes, patients were divided into a group with a regular treatment course (e.g., with planned discharge and without relapse during treatment) and a group with an irregular treatment course (e.g., relapse and/or premature and unplanned termination of treatment, "dropout"). Compared with patients completing treatment in a regular fashion, patients with relapse and/or treatment dropout had significantly poorer facial emotion recognition ability at treatment onset. Additional logistic regression analyses confirmed these results and identified poor emotion recognition performance as a significant predictor of relapse/dropout. Self-report (subjective) measures did not correspond with the neurobehavioral social cognition measures, that is, with objective task performance. Analyses of individual subtypes of facial emotions revealed poorer recognition particularly of disgust, anger, and neutral (no emotion) faces in patients with relapse/dropout. Social cognition in AUD is clinically relevant. Less successful treatment outcome was associated with poorer facial emotion recognition ability at the beginning of treatment. Impaired facial emotion recognition represents a neurocognitive risk factor that should be taken into account in alcohol dependence treatment. Treatments targeting the improvement of these social cognition deficits in AUD may offer a promising future approach. Copyright © 2017 by the Research Society on Alcoholism.
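The predictive analysis described here is a standard logistic regression of treatment outcome on baseline emotion recognition. A minimal sketch under simulated data (all values hypothetical):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 59
accuracy = rng.uniform(0.4, 1.0, n)   # emotion recognition at treatment onset
# Simulate outcomes so that poorer recognition raises dropout risk.
p_dropout = 1 / (1 + np.exp(-(3.0 - 5.0 * accuracy)))
outcome = rng.binomial(1, p_dropout)  # 1 = relapse/dropout, 0 = completion

model = sm.Logit(outcome, sm.add_constant(accuracy)).fit(disp=0)
print(model.params)              # a negative slope: higher accuracy, lower risk
print(np.exp(model.params[1]))   # odds ratio per unit of accuracy
```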
Stroud, J B; Freeman, T P; Leech, R; Hindocha, C; Lawn, W; Nutt, D J; Curran, H V; Carhart-Harris, R L
2018-02-01
Depressed patients robustly exhibit affective biases in emotional processing which are altered by SSRIs and predict clinical outcome. The objective of this study is to investigate whether psilocybin, recently shown to rapidly improve mood in treatment-resistant depression (TRD), alters patients' emotional processing biases. Seventeen patients with treatment-resistant depression completed a dynamic emotional face recognition task at baseline and 1 month later after two doses of psilocybin with psychological support. Sixteen controls completed the emotional recognition task over the same time frame but did not receive psilocybin. We found evidence for a group × time interaction on speed of emotion recognition (p = .035). At baseline, patients were slower at recognising facial emotions compared with controls (p < .001). After psilocybin, this difference was remediated (p = .208). Emotion recognition was faster at follow-up compared with baseline in patients (p = .004, d = .876) but not controls (p = .263, d = .302). In patients, this change was significantly correlated with a reduction in anhedonia over the same time period (r = .640, p = .010). Psilocybin with psychological support appears to improve processing of emotional faces in treatment-resistant depression, and this correlates with reduced anhedonia. Placebo-controlled studies are warranted to follow up these preliminary findings.
Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J
2013-12-01
Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual-only and combined conditions, time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio-only and visual-only conditions but did not differ from controls in the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.
Influence of gender in the recognition of basic facial expressions: A critical literature review
Forni-Santos, Larissa; Osório, Flávia L
2015-01-01
AIM: To conduct a systematic literature review about the influence of gender on the recognition of facial expressions of six basic emotions. METHODS: We performed a systematic search with the terms (face OR facial) AND (processing OR recognition OR perception) AND (emotional OR emotion) AND (gender OR sex) in the PubMed, PsycINFO, LILACS, and SciELO electronic databases for articles assessing outcomes related to response accuracy, latency, and emotional intensity. Article selection was performed according to parameters set by Cochrane. The reference lists of the articles found through the database search were checked for additional references of interest. RESULTS: With respect to accuracy, women tend to perform better than men when all emotions are considered as a set. Regarding specific emotions, there seem to be no gender-related differences in the recognition of happiness, whereas results are quite heterogeneous for the remaining emotions, especially sadness, anger, and disgust. Fewer articles dealt with the parameters of response latency and emotional intensity, which hinders the generalization of their findings, especially given their methodological differences. CONCLUSION: The analysis of the studies conducted to date does not allow for definite conclusions concerning the role of the observer’s gender in the recognition of facial emotion, mostly because of the absence of standardized methods of investigation. PMID:26425447
Opioid system and human emotions.
Nummenmaa, Lauri; Tuominen, Lauri
2017-04-10
Emotions are states of vigilant readiness that guide human and animal behaviour during survival-salient situations. Categorical models of emotions posit neurally and physiologically distinct basic human emotions (anger, fear, disgust, happiness, sadness and surprise) that govern different survival functions. Opioid receptors are expressed abundantly in the mammalian emotion circuit, and the opioid system modulates a variety of functions related to arousal and motivation. Yet, its specific contribution to different basic emotions has remained poorly understood. Here, we review how the endogenous opioid system and particularly the μ receptor contribute to emotional processing in humans. Activation of the endogenous opioid system is consistently associated with both pleasant and unpleasant emotions. In general, exogenous opioid agonists facilitate approach-oriented emotions (anger, pleasure) and inhibit avoidance-oriented emotions (fear, sadness). Opioids also modulate social bonding and affiliative behaviour, and prolonged opioid abuse may render both social bonding and emotion recognition circuits dysfunctional. However, there is no clear evidence that the opioid system is able to affect the emotions associated with surprise and disgust. Taken together, the opioid system contributes to a wide array of positive and negative emotions through its general ability to modulate the approach versus avoidance motivation associated with specific emotions. Because of the protective effects of opioid system-mediated prosociality and positive mood, the opioid system may constitute an important factor contributing to psychological and psychosomatic resilience. © 2017 The British Pharmacological Society.
Li, Shijia; Weerda, Riklef; Milde, Christopher; Wolf, Oliver T; Thiel, Christiane M
2015-02-01
Noradrenaline interacts with stress hormones in the amygdala and hippocampus to enhance emotional memory consolidation, but the noradrenergic-glucocorticoid interaction at retrieval, where stress impairs memory, is less understood. We used a genetic neuroimaging approach to investigate whether a genetic variation of the noradrenergic system impacts stress-induced neural activity in the amygdala and hippocampus during recognition of emotional memory. This study is based on a genotype-dependent reanalysis of data from our previous publication (Li et al. Brain Imaging Behav 2014). Twenty-two healthy male volunteers were genotyped for the ADRA2B gene encoding the α2B-adrenergic receptor. Ten deletion carriers and 12 noncarriers performed an emotional face recognition task, while their brain activity was measured with fMRI. During encoding, 50 fearful and 50 neutral faces were presented. One hour later, they underwent either an acute stress (Trier Social Stress Test) or a control procedure, which was followed immediately by the retrieval session, where participants had to discriminate between 100 old and 50 new faces. A genotype-dependent modulation of neural activity at retrieval was found in the bilateral amygdala and right hippocampus. Deletion carriers showed decreased neural activity in the amygdala when recognizing emotional faces in the control condition and increased amygdala activity under stress. Noncarriers showed no differences in emotion-modulated amygdala activation under stress or control conditions; instead, stress-induced increases during recognition of emotional faces were present in the right hippocampus. The genotype-dependent effects of acute stress on neural activity in the amygdala and hippocampus provide evidence for a noradrenergic-glucocorticoid interaction in emotional memory retrieval.
ERIC Educational Resources Information Center
Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki
2013-01-01
This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…
Multimodal approaches for emotion recognition: a survey
NASA Astrophysics Data System (ADS)
Sebe, Nicu; Cohen, Ira; Gevers, Theo; Huang, Thomas S.
2004-12-01
Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing-emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and the recent advances into the emotion recognition from facial, voice, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and we advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.
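One concrete reading of this probabilistic-fusion recommendation is a naive Bayes combination, in which the modalities are treated as conditionally independent given the emotion class. This is our illustrative assumption, not the specific graphical model advocated in the survey:

```python
import numpy as np

def fuse_posteriors(posteriors, prior):
    """Combine per-modality class posteriors p(c | x_i), assuming the
    modalities are conditionally independent given the emotion class:
    p(c | x_1..x_k) is proportional to prior(c) * prod_i [p(c | x_i) / prior(c)].
    """
    log_post = np.log(prior) + sum(np.log(p) - np.log(prior)
                                   for p in posteriors)
    post = np.exp(log_post - log_post.max())  # subtract max for stability
    return post / post.sum()

# Hypothetical posteriors over six basic emotions from three modalities
prior = np.full(6, 1 / 6)
face = np.array([0.50, 0.20, 0.10, 0.10, 0.05, 0.05])
voice = np.array([0.30, 0.30, 0.15, 0.10, 0.10, 0.05])
physio = np.array([0.25, 0.25, 0.20, 0.10, 0.10, 0.10])
print(fuse_posteriors([face, voice, physio], prior))
```

Richer graphical models relax the independence assumption, for example by adding links between modalities or latent affective states, which is where the fusion problem becomes genuinely challenging.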
Empathy costs: Negative emotional bias in high empathisers.
Chikovani, George; Babuadze, Lasha; Iashvili, Nino; Gvalia, Tamar; Surguladze, Simon
2015-09-30
Excessive empathy has been associated with compassion fatigue in health professionals and caregivers. We investigated the effect of empathy on emotion processing in 137 healthy individuals of both sexes. We tested the hypothesis that high empathy may underlie increased sensitivity in negative emotion recognition, which may interact with gender. Facial emotion stimuli comprised happy, angry, fearful, and sad faces presented at different intensities (mild and prototypical) and different durations (500 ms and 2000 ms). The parameters of emotion processing were represented by discrimination accuracy, response bias, and reaction time. We found that higher empathy was associated with better recognition of all emotions. We also demonstrated that higher empathy was associated with a response bias towards sad and fearful faces. The reaction time analysis revealed that higher empathy in females was associated with faster (compared with males) recognition of mildly sad faces of brief duration. We conclude that although empathic abilities provide an advantage in the recognition of all facial emotional expressions, the bias towards emotional negativity may potentially carry a risk of empathic distress. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
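Discrimination accuracy and response bias in a design like this are commonly quantified with signal detection theory; since the paper's exact bias measure is not specified here, the sketch below assumes d' and criterion c with a log-linear correction:

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from yes/no response counts,
    with a log-linear correction to avoid infinite z-scores at 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# Hypothetical counts for "sad" responses to sad vs. non-sad faces
print(sdt_measures(hits=40, misses=10, false_alarms=15, correct_rejections=35))
```

A negative criterion indicates a liberal tendency to report the emotion, which is the form a response bias towards sad and fearful faces would take.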
Sleep in Children Enhances Preferentially Emotional Declarative But Not Procedural Memories
ERIC Educational Resources Information Center
Prehn-Kristensen, Alexander; Goder, Robert; Chirobeja, Stefania; Bressman, Inka; Ferstl, Roman; Baving, Lioba
2009-01-01
Although the consolidation of several memory systems is enhanced by sleep in adults, recent studies suggest that sleep supports declarative memory but not procedural memory in children. In the current study, the influence of sleep on emotional declarative memory (recognition task) and procedural memory (mirror tracing task) in 20 healthy children…
Recognizing Biological Motion and Emotions from Point-Light Displays in Autism Spectrum Disorders
Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P.; Wenderoth, Nicole; Alaerts, Kaat
2012-01-01
One of the main characteristics of Autism Spectrum Disorder (ASD) is problems with social interaction and communication. Here, we explored ASD-related alterations in ‘reading’ the body language of other humans. Accuracy and reaction times were assessed in two observational tasks involving the recognition of ‘biological motion’ and ‘emotions’ from point-light displays (PLDs). Eye movements were recorded during the completion of the tests. Results indicated that typically developed participants were more accurate than ASD subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed on two control tasks (involving the indication of color changes in the moving point-lights). Group differences in reaction times existed on all tasks, but effect sizes were higher for the biological motion and emotion recognition tasks. Biological motion recognition abilities were related to a person’s ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not entirely be attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point-light displays. Eye movements were assessed during the completion of tasks, and results indicated that ASD participants generally produced more saccades and shorter fixation durations compared to the control group. However, especially for emotion recognition, these altered eye movements were associated with reductions in task performance. PMID:22970227
Romero, Neri L
2017-06-01
A common social impairment in individuals with ASD is difficulty interpreting and/or predicting the emotions of others. To date, several interventions targeting the teaching of emotion recognition and understanding have been utilized both by researchers and practitioners. The results suggest that teaching emotion recognition is possible, but that gains do not generalize to non-instructional contexts. This study sought to replicate earlier findings of a positive impact of teaching emotion recognition using a computer-based intervention and to extend them by testing for generalization to live models in the classroom setting. Two boys and one girl, four to eight years in age, educated in self-contained classrooms for students with communication and social skills deficits, participated in this study. A multiple probe across participants design was utilized. Measures of emotion recognition and understanding were assessed at baseline, intervention, and one month post-intervention to determine maintenance effects. Social validity was assessed through parent and teacher questionnaires. All participants showed improvements on measures assessing their recognition of emotions in faces, generalized this knowledge to live models, and maintained gains one month post-intervention. These preliminary results are encouraging and should be used to inform a group design, in order to test efficacy with a larger population. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bat-Pitault, F; Da Fonseca, D; Flori, S; Porcher-Guinet, V; Stagnara, C; Patural, H; Franco, P; Deruelle, C
2017-10-01
Emotional processing is characterized by a negative bias in depression, so it was legitimate to establish whether the same is true in very young at-risk children. Furthermore, sleep, also proposed as a marker of depression risk, is closely linked with emotions in adults and adolescents. We therefore wanted, first, to better describe the characteristics of emotion recognition in 3-year-olds and its links with sleep and, second, to observe whether an emotion recognition pattern indicating a vulnerability to depression could already be found at this young age. We studied, in 133 children aged 36 months from the AuBE cohort, the number of correct answers on a task of recognition of facial emotions (joy, anger and sadness). Cognitive functions were also assessed with the WPPSI III at 3 years of age, and the different sleep parameters (time of lights off and lights on, sleep times, difficulty going to sleep and number of parental awakenings per night) were described by questionnaires filled out by mothers at 6, 12, 18, 24 and 36 months after birth. Of these 133 children, 21 children whose mothers had at least one history of depression (13 boys) formed the high-risk group, and 19 children (8 boys) born to women with no history of depression formed the low-risk (control) group. Overall, children at 36 months recognized happiness significantly better than the other emotions (P=0.000), with better global recognition in girls (M=8.8) than boys (M=7.8) (P=0.013) and a positive correlation between global recognition ability and verbal IQ (P=0.000). Children who had less daytime sleep at 18 months and those who slept less at 24 months showed better recognition of sadness (P=0.043 and P=0.042); those with difficulties at bedtime at 18 months recognized happiness less well (P=0.043), and those who awakened earlier at 24 months had better global recognition of emotions (P=0.015). Finally, boys in the high-risk group recognized sadness better than boys in the control group (P=0.015). This study confirms that the recognition of emotion is related to development, with a female advantage and a link with language skills at 36 months of life. More importantly, we found a relationship between sleep characteristics and emotion recognition ability, and a negative bias in emotion recognition in young males at risk for depression. Copyright © 2016 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
Sellaro, Roberta; de Gelder, Beatrice; Finisguerra, Alessandra; Colzato, Lorenza S
2018-02-01
The polyvagal theory suggests that the vagus nerve is the key phylogenetic substrate enabling optimal social interactions, a crucial aspect of which is emotion recognition. A previous study showed that the vagus nerve plays a causal role in mediating people's ability to recognize emotions based on images of the eye region. The aim of this study is to verify whether the previously reported causal link between vagal activity and emotion recognition can be generalized to situations in which emotions must be inferred from images of whole faces and bodies. To this end, we employed transcutaneous vagus nerve stimulation (tVNS), a novel non-invasive brain stimulation technique that causes the vagus nerve to fire by the application of a mild electrical stimulation to the auricular branch of the vagus nerve, located in the anterior protuberance of the outer ear. In two separate sessions, participants received active or sham tVNS before and while performing two emotion recognition tasks, aimed at indexing their ability to recognize emotions from facial and bodily expressions. Active tVNS, compared to sham stimulation, enhanced emotion recognition for whole faces but not for bodies. Our results confirm and further extend recent observations supporting a causal relationship between vagus nerve activity and the ability to infer others' emotional state, but restrict this association to situations in which the emotional state is conveyed by the whole face and/or by salient facial cues, such as eyes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Making sense of self-conscious emotion: linking theory of mind and emotion in children with autism.
Heerey, Erin A; Keltner, Dacher; Capps, Lisa M
2003-12-01
Self-conscious emotions such as embarrassment and shame are associated with 2 aspects of theory of mind (ToM): (a) the ability to understand that behavior has social consequences in the eyes of others and (b) an understanding of social norm violations. The present study aimed to link ToM with the recognition of self-conscious emotion. Children with and without autism identified facial expressions of self-conscious and non-self-conscious emotions from photographs. ToM was also measured. Children with autism performed more poorly than comparison children at identifying self-conscious emotions, though they did not differ in the recognition of non-self-conscious emotions. When ToM ability was statistically controlled, group differences in the recognition of self-conscious emotion disappeared. Discussion focused on the links between ToM and self-conscious emotion.
Rapid communication: Global-local processing affects recognition of distractor emotional faces.
Srinivasan, Narayanan; Gupta, Rashmi
2011-03-01
Recent studies have shown links between happy faces and global, distributed attention, and between sad faces and local, focused attention. Emotions have been shown to affect global-local processing. Given that studies on emotion-cognition interactions have not explored the effect of perceptual processing at different spatial scales on processing stimuli with emotional content, the present study investigated the link between perceptual focus and emotional processing. The study investigated the effects of global-local processing on the recognition of distractor faces with emotional expressions. Participants performed a digit discrimination task with digits at either the global level or the local level presented against a distractor face (happy or sad) as background. The results showed that global processing, associated with a broad scope of attention, facilitates recognition of happy faces, and local processing, associated with a narrow scope of attention, facilitates recognition of sad faces. The novel results of the study provide conclusive evidence for emotion-cognition interactions by demonstrating the effect of perceptual processing on emotional faces. The results, along with earlier complementary results on the effect of emotion on global-local processing, support a reciprocal relationship between emotional processing and global-local processing. Distractor processing with emotional information also has implications for theories of selective attention.
Koelkebeck, Katja; Kohl, Waldemar; Luettgenau, Julia; Triantafillou, Susanna; Ohrmann, Patricia; Satoh, Shinji; Minoshita, Seiko
2015-07-30
A novel emotion recognition task that employs photos of a Japanese Noh mask, which represents a highly ambiguous stimulus, was evaluated. As non-Asians perceive and/or label emotions differently from Asians, we aimed to identify patterns of task performance in non-Asian healthy volunteers with a view to future patient studies. The Noh mask test was presented to 42 adult German participants. Reaction times and emotion attribution patterns were recorded. To control for emotion identification abilities, a standard emotion recognition task was used, among others. Questionnaires assessed personality traits. Finally, results were compared to age- and gender-matched Japanese volunteers. Compared to the other tasks, German participants displayed the slowest reaction times on the Noh mask test, indicating the higher demands of ambiguous emotion recognition. They assigned more positive emotions to the mask than Japanese volunteers, demonstrating culture-dependent emotion identification patterns. As alexithymic and anxious traits were associated with slower reaction times, personality dimensions also impacted performance. We showed an advantage of ambiguous over conventional emotion recognition tasks. Moreover, we determined emotion identification patterns in Western individuals that were influenced by personality dimensions, suggesting performance differences in clinical samples. Due to its properties, the Noh mask test represents a promising tool in the differential diagnosis of psychiatric disorders, e.g. schizophrenia. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Holding, Benjamin C; Laukka, Petri; Fischer, Håkan; Bänziger, Tanja; Axelsson, John; Sundelin, Tina
2017-11-01
Insufficient sleep has been associated with impaired recognition of facial emotions. However, previous studies have found inconsistent results, potentially stemming from the type of static picture task used. We therefore examined whether insufficient sleep was associated with decreased emotion recognition ability in two separate studies using a dynamic multimodal task. Study 1 used a cross-sectional design consisting of 291 participants with questionnaire measures assessing sleep duration and self-reported sleep quality for the previous night. Study 2 used an experimental design involving 181 participants where individuals were quasi-randomized into either a sleep-deprivation (N = 90) or a sleep-control (N = 91) condition. All participants from both studies were tested on the same forced-choice multimodal test of emotion recognition to assess the accuracy of emotion categorization. Sleep duration, self-reported sleep quality (study 1), and sleep deprivation (study 2) did not predict overall emotion recognition accuracy or speed. Similarly, the responses to each of the twelve emotions tested showed no evidence of impaired recognition ability, apart from one positive association suggesting that greater self-reported sleep quality could predict more accurate recognition of disgust (study 1). The studies presented here involve considerably larger samples than previous studies and the results support the null hypotheses. Therefore, we suggest that the ability to accurately categorize the emotions of others is not associated with short-term sleep duration or sleep quality and is resilient to acute periods of insufficient sleep. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
Golan, Ofer; Baron-Cohen, Simon; Hill, Jacqueline
2006-02-01
Adults with Asperger Syndrome (AS) can recognise simple emotions and pass basic theory of mind tasks, but have difficulties recognising more complex emotions and mental states. This study describes a new battery of tasks, testing recognition of 20 complex emotions and mental states from faces and voices. The battery was given to males and females with AS and matched controls. Results showed the AS group performed worse than controls overall, on emotion recognition from faces and voices and on 12/20 specific emotions. Females recognised faces better than males regardless of diagnosis, and males with AS had more difficulties recognising emotions from faces than from voices. The implications of these results are discussed in relation to social functioning in AS.
Age- and gender-related variations of emotion recognition in pseudowords and faces.
Demenescu, Liliana R; Mathiak, Krystyna A; Mathiak, Klaus
2014-01-01
BACKGROUND/STUDY CONTEXT: The ability to interpret emotionally salient stimuli is an important skill for successful social functioning at any age. The objective of the present study was to disentangle age and gender effects on emotion recognition ability in voices and faces. Three age groups of participants (young, age range: 18-35 years; middle-aged, age range: 36-55 years; and older, age range: 56-75 years) identified basic emotions presented in voices and faces in a forced-choice paradigm. Five emotions (angry, fearful, sad, disgusted, and happy) and a nonemotional category (neutral) were presented, encoded in color photographs of facial expressions and in pseudowords spoken with affective prosody. Overall, older participants had a lower accuracy rate in categorizing emotions than young and middle-aged participants. Females performed better than males in recognizing emotions from voices, and this gender difference emerged in middle-aged and older participants. Performance on emotion recognition in faces was significantly correlated with performance in voices. The current study provides further evidence for a general age and gender effect on emotion recognition; the advantage of females seems to be age- and stimulus modality-dependent.
Emotion-attention interactions in recognition memory for distractor faces.
Srinivasan, Narayanan; Gupta, Rashmi
2010-04-01
Effective filtering of distractor information has been shown to be dependent on perceptual load. Given the salience of emotional information and the presence of emotion-attention interactions, we wanted to explore recognition memory for emotional distractors, especially as a function of focused and distributed attention, by manipulating load and the spatial spread of attention. We performed two experiments to study emotion-attention interactions by measuring recognition memory performance for neutral and emotional distractor faces. Participants performed a color discrimination task (low load) or letter identification task (high load) with a letter-string display in Experiment 1 and a high-load letter identification task with letters presented in a circular array in Experiment 2. The stimuli were presented against a distractor face background. The recognition memory results show that happy faces were recognized better than sad faces under conditions of less focused or distributed attention. When attention was more spatially focused, sad faces were recognized better than happy faces. The study provides evidence for emotion-attention interactions in which specific emotional information, such as sad or happy, is associated with focused or distributed attention, respectively. Distractor processing with emotional information also has implications for theories of attention. Copyright 2010 APA, all rights reserved.
Allgood, Rebecca; Heaton, Pamela
2015-09-01
Although the configurations of psychoacoustic cues signalling emotions in human vocalizations and instrumental music are very similar, cross-domain links in recognition performance have yet to be studied developmentally. Two hundred and twenty 5- to 10-year-old children were asked to identify musical excerpts and vocalizations as happy, sad, or fearful. The results revealed age-related increases in overall recognition performance with significant correlations across vocal and musical conditions at all developmental stages. Recognition scores were greater for musical than vocal stimuli and were superior in females compared with males. These results confirm that recognition of emotions in vocal and musical stimuli is linked by 5 years and that sensitivity to emotions in auditory stimuli is influenced by age and gender. © 2015 The British Psychological Society.
Recognition of face identity and emotion in expressive specific language impairment.
Merkenschlager, A; Amorosa, H; Kiefl, H; Martinius, J
2012-01-01
To study face and emotion recognition in children with mostly expressive specific language impairment (SLI-E). A test movie to study perception and recognition of faces and mimic-gestural expression was applied to 24 children diagnosed as suffering from SLI-E and an age-matched control group of normally developing children. Compared to a normal control group, the SLI-E children scored significantly worse in both the face and expression recognition tasks with a preponderant effect on emotion recognition. The performance of the SLI-E group could not be explained by reduced attention during the test session. We conclude that SLI-E is associated with a deficiency in decoding non-verbal emotional facial and gestural information, which might lead to profound and persistent problems in social interaction and development. Copyright © 2012 S. Karger AG, Basel.
[Recognition of facial expression of emotions in Parkinson's disease: a theoretical review].
Alonso-Recio, L; Serrano-Rodriguez, J M; Carvajal-Molina, F; Loeches-Alonso, A; Martin-Plasencia, P
2012-04-16
Emotional facial expression is a basic guide during social interaction and, therefore, alterations in its expression or recognition impose important limitations on communication. To examine facial expression recognition abilities and their possible impairment in Parkinson's disease. First, we review the studies on this topic, which have not found entirely similar results. Second, we analyze the factors that may explain these discrepancies and, in particular, as a third objective, we consider the relationship between emotional recognition problems and the cognitive impairment associated with the disease. Finally, we propose alternative strategies for the development of studies that could clarify the state of these abilities in Parkinson's disease. Most studies suggest deficits in facial expression recognition, especially for expressions with negative emotional content. However, it is possible that these alterations are related to those that also appear in the course of the disease in other perceptual and executive processes. To advance on this issue, we consider it necessary to design emotion recognition studies that differentially implicate executive or visuospatial processes, and/or that contrast cognitive abilities using facial expressions and non-emotional stimuli. Clarifying the status of these abilities, as well as increasing our knowledge of the functional consequences of the brain damage characteristic of the disease, may indicate whether special attention should be paid to their rehabilitation within the programs implemented.
Intact anger recognition in depression despite aberrant visual facial information usage.
Clark, Cameron M; Chiu, Carina G; Diaz, Ruth L; Goghari, Vina M
2014-08-01
Previous literature has indicated abnormalities in facial emotion recognition abilities, as well as deficits in basic visual processes, in major depression. However, the literature is unclear on a number of important factors, including whether these abnormalities represent deficient or enhanced emotion recognition abilities compared to control populations, and the degree to which basic visual deficits might impact this process. The present study investigated emotion recognition abilities for angry versus neutral facial expressions in a sample of undergraduate students with Beck Depression Inventory-II (BDI-II) scores indicative of moderate depression (i.e., ≥20), compared to matched low-BDI-II score (i.e., ≤2) controls, via the Bubbles Facial Emotion Perception Task. Results indicated unimpaired behavioural performance in discriminating angry from neutral expressions in the high depressive symptoms group relative to the minimal depressive symptoms group, despite evidence of an abnormal pattern of visual facial information usage. The generalizability of the current findings is limited by the highly structured nature of the facial emotion recognition task used, as well as by the use of an analogue sample of undergraduates scoring high in self-rated symptoms of depression rather than a clinical sample. Our findings suggest that basic visual processes are involved in emotion recognition abnormalities in depression, demonstrating consistency with the emotion recognition literature in other psychopathologies (e.g., schizophrenia, autism, social anxiety). Future research should seek to replicate these findings in clinical populations with major depression, and assess the association between aberrant face gaze behaviours and symptom severity and social functioning. Copyright © 2014 Elsevier B.V. All rights reserved.
Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V
2014-07-01
Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.
Li, Shijia; Weerda, Riklef; Milde, Christopher; Wolf, Oliver T; Thiel, Christiane M
2014-12-01
Previous studies have shown that acute psychosocial stress impairs recognition of declarative memory and that emotional material is especially sensitive to this effect. Animal studies suggest a central role of the amygdala which modulates memory processes in hippocampus, prefrontal cortex and other brain areas. We used functional magnetic resonance imaging (fMRI) to investigate neural correlates of stress-induced modulation of emotional recognition memory in humans. Twenty-seven healthy, right-handed, non-smoker male volunteers performed an emotional face recognition task. During encoding, participants were presented with 50 fearful and 50 neutral faces. One hour later, they underwent either a stress (Trier Social Stress Test) or a control procedure outside the scanner which was followed immediately by the recognition session inside the scanner, where participants had to discriminate between 100 old and 50 new faces. Stress increased salivary cortisol, blood pressure and pulse, and decreased the mood of participants but did not impact recognition memory. BOLD data during recognition revealed a stress condition by emotion interaction in the left inferior frontal gyrus and right hippocampus which was due to a stress-induced increase of neural activity to fearful and a decrease to neutral faces. Functional connectivity analyses revealed a stress-induced increase in coupling between the right amygdala and the right fusiform gyrus, when processing fearful as compared to neutral faces. Our results provide evidence that acute psychosocial stress affects medial temporal and frontal brain areas differentially for neutral and emotional items, with a stress-induced privileged processing of emotional stimuli.
Krendl, Anne C; Rule, Nicholas O; Ambady, Nalini
2014-09-01
Young adults can be surprisingly accurate at making inferences about people from their faces. Although these first impressions have important consequences for both the perceiver and the target, it remains an open question whether first impression accuracy is preserved with age. Specifically, could age differences in impressions toward others stem from age-related deficits in accurately detecting complex social cues? Research on aging and impression formation suggests that young and older adults show relative consensus in their first impressions, but it is unknown whether they differ in accuracy. It has been widely shown that aging disrupts emotion recognition accuracy, and that these impairments may predict deficits in other social judgments, such as detecting deceit. However, it is unclear whether general impression formation accuracy (e.g., emotion recognition accuracy, detecting complex social cues) relies on similar or distinct mechanisms. It is important to examine this question to evaluate how, if at all, aging might affect overall accuracy. Here, we examined whether aging impaired first impression accuracy in predicting real-world outcomes and categorizing social group membership. Specifically, we studied whether emotion recognition accuracy and age-related cognitive decline (which has been implicated in exacerbating deficits in emotion recognition) predict first impression accuracy. Our results revealed that emotion recognition accuracy did not predict first impression accuracy, nor did age-related cognitive decline impair it. These findings suggest that domains of social perception outside of emotion recognition may rely on mechanisms that are relatively unimpaired by aging. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Enrici, Ivan; Adenzato, Mauro; Ardito, Rita B.; Mitkova, Antonia; Cavallo, Marco; Zibetti, Maurizio; Lopiano, Leonardo; Castelli, Lorys
2015-01-01
Background Parkinson’s disease (PD) is characterised by well-known motor symptoms, whereas the presence of cognitive non-motor symptoms, such as emotional disturbances, is still underestimated. One of the major problems in studying emotion deficits in PD is an atomising approach that does not take into account different levels of emotion elaboration. Our study addressed the question of whether people with PD exhibit difficulties in one or more specific dimensions of emotion processing, investigating three different levels of analysis, that is, recognition, representation, and regulation. Methodology Thirty-two consecutive medicated patients with PD and 25 healthy controls were enrolled in the study. Participants completed a three-level assessment of emotional processing using quantitative standardised emotional tasks: the Ekman 60-Faces for emotion recognition, the full 36-item version of the Reading the Mind in the Eyes (RME) for emotion representation, and the 20-item Toronto Alexithymia Scale (TAS-20) for emotion regulation. Principal Findings Regarding emotion recognition, patients obtained significantly worse scores than controls on the total score of the Ekman 60-Faces, but the deficit was not specific to any individual basic emotion. For emotion representation, patients obtained significantly worse scores than controls on the RME experimental score but not on the RME gender control task. Finally, for emotion regulation, PD patients and controls did not perform differently on the TAS-20, and no specific differences were found on the TAS-20 subscales. The PD impairments in emotion recognition and representation do not correlate with dopamine therapy, disease severity, or duration of illness. These results are independent of other cognitive processes, such as global cognitive status and executive function, and of psychiatric status, such as depression, anxiety or apathy. Conclusions These results may contribute to a better understanding of the emotional problems that are often seen in patients with PD and of the measures used to test these problems, in particular the use of different versions of the RME task. PMID:26110271
ERIC Educational Resources Information Center
Cebula, Katie R.; Wishart, Jennifer G.; Willis, Diane S.; Pitcairn, Tom K.
2017-01-01
Some children with Down syndrome may experience difficulties in recognizing facial emotions, particularly fear, but it is not clear why, nor how such skills can best be facilitated. Using a photo-matching task, emotion recognition was tested in children with Down syndrome, children with nonspecific intellectual disability and cognitively matched,…
Sawyer, Alyssa C P; Williamson, Paul; Young, Robyn L
2012-04-01
Research has shown that individuals with Autism Spectrum Disorders (ASD) have difficulties recognising emotions from facial expressions. Since eye contact is important for accurate emotion recognition, and individuals with ASD tend to avoid eye contact, this tendency for gaze aversion has been proposed as an explanation for the emotion recognition deficit. This explanation was investigated using a newly developed emotion and mental state recognition task. Individuals with Asperger's Syndrome were less accurate at recognising emotions and mental states, but did not show evidence of gaze avoidance compared to individuals without Asperger's Syndrome. This suggests that the way individuals with Asperger's Syndrome look at faces cannot account for the difficulty they have recognising expressions.
State-dependent alteration in face emotion recognition in depression.
Anderson, Ian M; Shippen, Clare; Juhasz, Gabriella; Chase, Diana; Thomas, Emma; Downey, Darragh; Toth, Zoltan G; Lloyd-Williams, Kathryn; Elliott, Rebecca; Deakin, J F William
2011-04-01
Negative biases in emotional processing are well recognised in people who are currently depressed but are less well described in those with a history of depression, where such biases may contribute to vulnerability to relapse. To compare accuracy, discrimination and bias in face emotion recognition in those with current and remitted depression. The sample comprised a control group (n = 101), a currently depressed group (n = 30) and a remitted depression group (n = 99). Participants provided valid data on a computerised face emotion recognition task administered after standardised assessment of diagnosis and mood symptoms. In the control group, women were more accurate in recognising emotions than men owing to greater discrimination. Among participants with depression, those in remission correctly identified more emotions than controls owing to increased response bias, whereas those currently depressed recognised fewer emotions owing to decreased discrimination. These effects were most marked for anger, fear and sadness, but there was no significant emotion × group interaction, and a similar pattern tended to be seen for happiness although not for surprise or disgust. These differences were confined to participants who were antidepressant-free, with those taking antidepressants having similar results to the control group. Abnormalities in face emotion recognition differ between people with current depression and those in remission. Reduced discrimination in depressed participants may reflect withdrawal from the emotions of others, whereas the increased bias in those with a history of depression could contribute to vulnerability to relapse. The normal face emotion recognition seen in those taking medication may relate to the known effects of antidepressants on emotional processing and could contribute to their ability to protect against depressive relapse.
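The accuracy, discrimination, and bias measures contrasted in this abstract are conventionally separated using signal detection theory: d' indexes discrimination and the criterion c indexes response bias, both computed from hit and false-alarm rates. A minimal sketch in Python, using made-up response counts rather than data from this study:

    # Separating discrimination (d') from response bias (criterion c).
    # The counts below are illustrative, not this study's data.
    from statistics import NormalDist

    def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
        z = NormalDist().inv_cdf
        # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        d_prime = z(hit_rate) - z(fa_rate)              # discrimination
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # response bias
        return d_prime, criterion

    print(dprime_and_criterion(hits=38, misses=12, false_alarms=10, correct_rejections=40))

A more liberal bias corresponds to a more negative criterion, which is how an increased tendency to label faces as emotional would appear in such an analysis.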
Emotion Telepresence: Emotion Augmentation through Affective Haptics and Visual Stimuli
NASA Astrophysics Data System (ADS)
Tsetserukou, D.; Neviarouskaya, A.
2012-03-01
The paper focuses on a novel concept of emotional telepresence. The iFeel_IM! system, which is in the vanguard of this technology, integrates the 3D virtual world Second Life, an intelligent component for automatic emotion recognition from text messages, and innovative affective haptic interfaces providing additional nonverbal communication channels through simulation of emotional feedback and social touch (physical co-presence). Users can not only exchange messages but also emotionally and physically feel the presence of the communication partner (e.g., a family member, friend, or beloved person). The next prototype of the system will include a tablet computer. The user will be able to interact haptically with an avatar and thus influence its mood and the emotion of the partner. A finger gesture language will be designed for communication with the avatar. This will bring a new level of immersion to online communication.
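The automatic text-based emotion recognition component of such a system can be pictured with a minimal keyword-lexicon sketch; the lexicon, the emotion labels, and the trigger_haptics() stub below are hypothetical illustrations, not the iFeel_IM! implementation:

    # Toy lexicon-based emotion tagger of the kind that could drive an
    # affective haptic channel. Lexicon and trigger_haptics() are
    # hypothetical stand-ins, not the iFeel_IM! implementation.
    LEXICON = {
        "love": "joy", "great": "joy", "happy": "joy",
        "miss": "sadness", "alone": "sadness",
        "hate": "anger", "furious": "anger",
    }

    def recognize_emotion(message: str) -> str:
        votes = {}
        for token in message.lower().split():
            emotion = LEXICON.get(token.strip(".,!?"))
            if emotion:
                votes[emotion] = votes.get(emotion, 0) + 1
        # Fall back to 'neutral' when no lexicon word is present.
        return max(votes, key=votes.get) if votes else "neutral"

    def trigger_haptics(emotion: str) -> None:
        # Stand-in for the affective haptic interface: print instead of actuate.
        print(f"actuating haptic pattern for: {emotion}")

    trigger_haptics(recognize_emotion("I miss you and feel so alone tonight"))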
Demirci, Esra; Erdogan, Ayten
2016-12-01
The objectives of this study were to evaluate both face and emotion recognition, to detect differences among attention deficit and hyperactivity disorder (ADHD) subgroups, to identify effects of gender, and to assess the effects of methylphenidate and atomoxetine treatment on both face and emotion recognition in patients with ADHD. The study sample consisted of 41 male and 29 female patients, 8-15 years of age, who were diagnosed as having combined type ADHD (N = 26), hyperactive/impulsive type ADHD (N = 21) or inattentive type ADHD (N = 23) but had not previously used any medication for ADHD, and 35 male and 25 female healthy individuals. Long-acting methylphenidate (OROS-MPH) was prescribed to 38 patients, whereas atomoxetine was prescribed to 32 patients. The Reading the Mind in the Eyes Test (RMET) and the Benton Facial Recognition Test (BFRT) were administered to all participants before and after treatment. The patients with ADHD had a significantly lower number of correct answers on the child and adolescent RMET and on the BFRT than the healthy controls. Among the ADHD subtypes, the hyperactive/impulsive subtype had a lower number of correct answers on the RMET than the inattentive subtype, and the hyperactive/impulsive subtype had a lower number of correct answers on the short and long forms of the BFRT than the combined and inattentive subtypes. Male and female patients with ADHD did not differ significantly with respect to the number of correct answers on the RMET and BFRT. The patients showed significant improvement on the RMET and BFRT after treatment with OROS-MPH or atomoxetine. Patients with ADHD have difficulties in face recognition as well as emotion recognition. Both OROS-MPH and atomoxetine affect emotion recognition. However, further studies on face and emotion recognition in ADHD are needed.
Postsurgical Disfigurement Influences Disgust Recognition: A Case-Control Study.
Lisan, Quentin; George, Nathalie; Hans, Stephane; Laccourreye, Ollivier; Lemogne, Cédric
Little is known about how emotion recognition may be modified in individuals prone to elicit disgust. We sought to determine whether subjects with total laryngectomy would present a modified recognition of facial expressions of disgust. A total of 29 patients presenting with a history of advanced-stage laryngeal cancer were recruited, 17 being surgically treated (total laryngectomy) and 12 treated with chemoradiation therapy only. Based on a validated set of images of facial expressions of fear, disgust, surprise, happiness, sadness and anger displayed by 6 actors, we presented participants with expressions of each emotion at 5 levels of increasing intensity and measured their ability to recognize these emotions. Participants with (vs without) laryngectomy showed a higher threshold for the recognition of disgust (3.2 vs 2.7 images needed before emotion recognition, p = 0.03) and a lower rate of correct recognition (75.5% vs 88.9%, p = 0.03). Subjects presenting with an aesthetic impairment of the head and neck showed poorer performance in disgust recognition when compared with those without disfigurement. These findings might relate either to some perceptual adaptation or habituation phenomenon, or to higher-level processes related to emotion regulation strategies. Copyright © 2018 Academy of Consultation-Liaison Psychiatry. Published by Elsevier Inc. All rights reserved.
Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M
2017-05-01
This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Bernaerts, Sylvie; Berra, Emmely; Wenderoth, Nicole; Alaerts, Kaat
2016-10-01
The neuropeptide 'oxytocin' (OT) is known to play a pivotal role in a variety of complex social behaviors by promoting a prosocial attitude and interpersonal bonding. One mechanism by which OT is hypothesized to promote prosocial behavior is by enhancing the processing of socially relevant information from the environment. With the present study, we explored to what extent OT can alter the 'reading' of emotional body language as presented by impoverished biological motion point-light displays (PLDs). To do so, a double-blind, between-subjects, randomized, placebo-controlled trial was conducted, assessing performance on a bodily emotion recognition task in healthy adult males before and after a single dose of intranasal OT (24 IU). Overall, a single dose of OT had a significant medium-sized effect on emotion recognition from body language. OT-induced improvements in emotion recognition were not differentially modulated by the emotional valence of the presented stimuli (positive versus negative), and the overall tendency to label an observed emotional state as 'happy' (positive) or 'angry' (negative) was also not modified by the administration of OT. Albeit moderate, the present findings of OT-induced improvements in bodily emotion recognition from whole-body PLDs provide further support for a link between OT and the processing of socio-communicative cues originating from the body of others. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
White, Corey N.; Kapucu, Aycan; Bruno, Davide; Rotello, Caren M.; Ratcliff, Roger
2014-01-01
Recognition memory studies often find that emotional items are more likely than neutral items to be labeled as studied. Previous work suggests this bias is driven by increased memory strength/familiarity for emotional items. We explored strength and bias interpretations of this effect with the conjecture that emotional stimuli might seem more familiar because they share features with studied items from the same category. Categorical effects were manipulated in a recognition task by presenting lists with a small, medium, or large proportion of emotional words. The liberal memory bias for emotional words was only observed when a medium or large proportion of categorized words were presented in the lists. Similar, though weaker, effects were observed with categorized words that were not emotional (animal names). These results suggest that liberal memory bias for emotional items may be largely driven by effects of category membership. PMID:24303902
ERIC Educational Resources Information Center
Bekele, Esubalew; Crittendon, Julie; Zheng, Zhi; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan
2014-01-01
Teenagers with autism spectrum disorder (ASD) and age-matched controls participated in a dynamic facial affect recognition task within a virtual reality (VR) environment. Participants identified the emotion of a facial expression displayed at varied levels of intensity by a computer generated avatar. The system assessed performance (i.e.,…
Coleman, Jonathan R.I.; Lester, Kathryn J.; Keers, Robert; Munafò, Marcus R.; Breen, Gerome
2017-01-01
Emotion recognition is disrupted in many mental health disorders, which may reflect shared genetic aetiology between this trait and these disorders. We explored genetic influences on emotion recognition and the relationship between these influences and mental health phenotypes. Eight‐year‐old participants (n = 4,097) from the Avon Longitudinal Study of Parents and Children (ALSPAC) completed the Diagnostic Analysis of Non‐Verbal Accuracy (DANVA) faces test. Genome‐wide genotype data was available from the Illumina HumanHap550 Quad microarray. Genome‐wide association studies were performed to assess associations with recognition of individual emotions and emotion in general. Exploratory polygenic risk scoring was performed using published genomic data for schizophrenia, bipolar disorder, depression, autism spectrum disorder, anorexia, and anxiety disorders. No individual genetic variants were identified at conventional levels of significance in any analysis although several loci were associated at a level suggestive of significance. SNP‐chip heritability analyses did not identify a heritable component of variance for any phenotype. Polygenic scores were not associated with any phenotype. The effect sizes of variants influencing emotion recognition are likely to be small. Previous studies of emotion identification have yielded non‐zero estimates of SNP‐heritability. This discrepancy is likely due to differences in the measurement and analysis of the phenotype. PMID:28608620
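The polygenic risk scoring used in this study reduces, in its standard form, to a weighted allele count: an individual's score is the sum over SNPs of the published GWAS effect size multiplied by the number of risk alleles carried. A minimal sketch with invented effect sizes and genotypes (not ALSPAC data):

    # Polygenic score as a weighted sum of risk-allele counts; the
    # effect sizes and genotype matrix are invented for illustration.
    import numpy as np

    effect_sizes = np.array([0.12, -0.05, 0.08])  # per-SNP weights from a published GWAS (assumed)
    genotypes = np.array([[2, 1, 0],              # risk-allele counts (0/1/2), one row per child
                          [0, 2, 1]])

    prs = genotypes @ effect_sizes                # one score per individual
    print(prs)                                    # [ 0.19 -0.02]

In practice the score is then regressed against the phenotype (here, DANVA accuracy) to test for shared genetic aetiology.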
Impaired recognition of facial emotions from low-spatial frequencies in Asperger syndrome.
Kätsyri, Jari; Saalasti, Satu; Tiippana, Kaisa; von Wendt, Lennart; Sams, Mikko
2008-01-01
The theory of 'weak central coherence' [Happe, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5-25] implies that persons with autism spectrum disorders (ASDs) have a perceptual bias for local but not for global stimulus features. The recognition of emotional facial expressions representing various different levels of detail has not been studied previously in ASDs. We analyzed the recognition of four basic emotional facial expressions (anger, disgust, fear and happiness) from low-spatial frequencies (overall global shapes without local features) in adults with an ASD. A group of 20 participants with Asperger syndrome (AS) was compared to a group of non-autistic age- and sex-matched controls. Emotion recognition was tested from static and dynamic facial expressions whose spatial frequency contents had been manipulated by low-pass filtering at two levels. The two groups recognized emotions similarly from non-filtered faces and from dynamic vs. static facial expressions. In contrast, the participants with AS were less accurate than controls in recognizing facial emotions from very low-spatial frequencies. The results suggest intact recognition of basic facial emotions and dynamic facial information, but impaired visual processing of global features in ASDs.
Gender differences in facial emotion recognition in persons with chronic schizophrenia.
Weiss, Elisabeth M; Kohler, Christian G; Brensinger, Colleen M; Bilker, Warren B; Loughead, James; Delazer, Margarete; Nolan, Karen A
2007-03-01
The aim of the present study was to investigate possible sex differences in the recognition of facial expressions of emotion and to investigate the pattern of classification errors in schizophrenic males and females. Such an approach provides an opportunity to inspect the degree to which males and females differ in perceiving and interpreting the different emotions displayed to them and to analyze which emotions are most susceptible to recognition errors. Fifty-six chronically hospitalized schizophrenic patients (38 men and 18 women) completed the Penn Emotion Recognition Test (ER40), a computerized emotion discrimination test presenting 40 color photographs of evoked happy, sad, angry, and fearful expressions and neutral expressions, balanced for poser gender and ethnicity. We found a significant sex difference in the patterns of error rates on the Penn Emotion Recognition Test. Neutral faces were more commonly mistaken as angry by schizophrenic men, whereas schizophrenic women misinterpreted neutral faces more frequently as sad. Moreover, female faces were better recognized overall, but fear was better recognized in same-gender photographs, whereas anger was better recognized in different-gender photographs. The findings of the present study lend support to the notion that sex differences in aggressive behavior could be related to a cognitive style characterized by hostile attributions to neutral faces in schizophrenic men.
False recognition of facial expressions of emotion: causes and implications.
Fernández-Dols, José-Miguel; Carrera, Pilar; Barchard, Kimberly A; Gacitua, Marta
2008-08-01
This article examines the importance of semantic processes in the recognition of emotional expressions, through a series of three studies on false recognition. The first study found a high frequency of false recognition of prototypical expressions of emotion when participants viewed slides and video clips of nonprototypical fearful and happy expressions. The second study tested whether semantic processes caused false recognition. The authors found that participants made significantly higher error rates when asked to detect expressions that corresponded to semantic labels than when asked to detect visual stimuli. Finally, given that previous research reported that false memories are less prevalent in younger children, the third study tested whether false recognition of prototypical expressions increased with age. The authors found that 67% of 8- to 9-year-old children reported nonpresent prototypical expressions of fear in a fearful context, but only 40% of 6- to 7-year-old children did so. Taken together, these three studies demonstrate the importance of semantic processes in the detection and categorization of prototypical emotional expressions.
Leppänen, J M; Niehaus, D J H; Koen, L; Du Toit, E; Schoeman, R; Emsley, R
2006-06-01
Schizophrenia is associated with a deficit in the recognition of negative emotions from facial expressions. The present study examined the universality of this finding by studying facial expression recognition in an African Xhosa population. Forty-four Xhosa patients with schizophrenia and forty healthy controls were tested with a computerized task requiring rapid perceptual discrimination of matched positive (i.e. happy), negative (i.e. angry), and neutral faces. Patients were as accurate as controls in recognizing happy faces but showed a marked impairment in recognition of angry faces. The impairment was particularly pronounced for high-intensity (open-mouth) angry faces. Patients also exhibited more false happy and angry responses to neutral faces than controls. No correlation between level of education or illness duration and emotion recognition was found, but the deficit in the recognition of negative emotions was more pronounced in familial compared to non-familial cases of schizophrenia. These findings suggest that the deficit in the recognition of negative facial expressions may constitute a universal neurocognitive marker of schizophrenia.
Children's understanding of facial expression of emotion: II. Drawing of emotion-faces.
Missaghi-Lakshman, M; Whissell, C
1991-06-01
67 children from Grades 2, 4, and 7 drew faces representing the emotional expressions of fear, anger, surprise, disgust, happiness, and sadness. The children themselves and 29 adults later decoded the drawings in an emotion-recognition task. Children were the more accurate decoders, and their accuracy and the accuracy of adults increased significantly for judgments of 7th-grade drawings. The emotions happy and sad were most accurately decoded. There were no significant differences associated with sex. In their drawings, children utilized a symbol system that seems to be based on a highlighting or exaggeration of features of the innately governed facial expression of emotion.
Emotional Memory Persists Longer than Event Memory
ERIC Educational Resources Information Center
Kuriyama, Kenichi; Soshi, Takahiro; Fujii, Takeshi; Kim, Yoshiharu
2010-01-01
The interaction between amygdala-driven and hippocampus-driven activities is expected to explain why emotion enhances episodic memory recognition. However, overwhelming behavioral evidence regarding the emotion-induced enhancement of immediate and delayed episodic memory recognition has not been obtained in humans. We found that the recognition…
Parker, Alison E.; Mathis, Erin T.; Kupersmidt, Janis B.
2016-01-01
The study examined children’s recognition of emotion from faces and body poses, as well as gender differences in these recognition abilities. Preschool-aged children (N = 55) and their parents and teachers participated in the study. Preschool-aged children completed a web-based measure of emotion recognition skills, which included five tasks (three with faces and two with bodies). Parents and teachers reported on children’s aggressive behaviors and social skills. Children’s emotion accuracy on two of the three facial tasks and one of the body tasks was related to teacher reports of social skills. Some of these relations were moderated by child gender. In particular, the relationships between emotion recognition accuracy and reports of children’s behavior were stronger for boys than girls. Identifying preschool-aged children’s strengths and weaknesses in identification of emotion from faces and body poses may be helpful in guiding interventions with children who have problems with social and behavioral functioning that may be due, in part, to emotional knowledge deficits. Further developmental implications of these findings are discussed. PMID:27057129
ERIC Educational Resources Information Center
Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle
2008-01-01
The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…
Recognition of facial and musical emotions in Parkinson's disease.
Saenz, A; Doé de Maindreville, A; Henry, A; de Labbey, S; Bakchine, S; Ehrlé, N
2013-03-01
Patients with amygdala lesions were found to be impaired in recognizing the fear emotion both from faces and from music. In patients with Parkinson's disease (PD), impairment in the recognition of emotions from facial expressions has been reported for disgust, fear, sadness and anger, but no studies had yet investigated this population for the recognition of emotions from both face and music. The ability to recognize basic universal emotions (fear, happiness and sadness) from both face and music was investigated in 24 medicated patients with PD and 24 healthy controls. The patient group was tested for language (verbal fluency tasks), memory (digit and spatial span), executive functions (Similarities and Picture Completion subtests of the WAIS III, Brixton and Stroop tests), and visual attention (Bells test), and completed self-assessment tests for anxiety and depression. Results showed that the PD group was significantly impaired in recognition of both fear and sadness from facial expressions, whereas their performance in recognition of emotions from musical excerpts was not different from that of the control group. The scores for fear and sadness recognition from faces were neither correlated with scores on tests of executive and cognitive functions, nor with scores on the self-assessment scales. We attributed the observed dissociation to the modality (visual vs. auditory) of presentation and to the ecological value of the musical stimuli that we used. We discuss the relevance of our findings for the care of patients with PD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.
Social appraisal influences recognition of emotions.
Mumenthaler, Christian; Sander, David
2012-06-01
The notion of social appraisal emphasizes the importance of a social dimension in appraisal theories of emotion by proposing that the way an individual appraises an event is influenced by the way other individuals appraise and feel about the same event. This study directly tested this proposal by asking participants to recognize dynamic facial expressions of emotion (fear, happiness, or anger in Experiment 1; fear, happiness, anger, or neutral in Experiment 2) in a target face presented at the center of a screen while a contextual face, which appeared simultaneously in the periphery of the screen, expressed an emotion (fear, happiness, anger) or not (neutral) and either looked at the target face or not. We manipulated gaze direction to be able to distinguish between a mere contextual effect (gaze away from both the target face and the participant) and a specific social appraisal effect (gaze toward the target face). Results of both experiments provided evidence for a social appraisal effect in emotion recognition, which differed from the mere effect of contextual information: Whereas facial expressions were identical in both conditions, the direction of the gaze of the contextual face influenced emotion recognition. Social appraisal facilitated the recognition of anger, happiness, and fear when the contextual face expressed the same emotion. This facilitation was stronger than the mere contextual effect. Social appraisal also allowed better recognition of fear when the contextual face expressed anger and better recognition of anger when the contextual face expressed fear. (c) 2012 APA, all rights reserved.
Rigon, Arianna; Turkstra, Lyn; Mutlu, Bilge; Duff, Melissa
2016-10-01
Although moderate to severe traumatic brain injury (TBI) leads to facial affect recognition impairments in up to 39% of individuals, protective and risk factors for these deficits are unknown. The aim of the current study was to examine the effect of sex on emotion recognition abilities following TBI. We administered two separate emotion recognition tests (one static and one dynamic) to 53 individuals with moderate to severe TBI (females = 28) and 49 demographically matched comparisons (females = 22). We then investigated the presence of a sex-by-group interaction in emotion recognition accuracy. In the comparison group, there were no sex differences. In the TBI group, however, females significantly outperformed males in the dynamic (but not the static) task. Moreover, males (but not females) with TBI performed significantly worse than comparison participants in the dynamic task. Further analysis revealed that sex differences in emotion recognition abilities within the TBI group could not be explained by lesion location, TBI severity, or other neuropsychological variables. These findings suggest that sex may serve as a protective factor for social impairment following TBI and inform clinicians working with TBI as well as research on the neurophysiological correlates of sex differences in social functioning.
Dissociation between facial and bodily expressions in emotion recognition: A case study.
Leiva, Samanta; Margulis, Laura; Micciulli, Andrea; Ferreres, Aldo
2017-12-21
Existing single-case studies have reported deficit in recognizing basic emotions through facial expression and unaffected performance with body expressions, but not the opposite pattern. The aim of this paper is to present a case study with impaired emotion recognition through body expressions and intact performance with facial expressions. In this single-case study we assessed a 30-year-old patient with autism spectrum disorder, without intellectual disability, and a healthy control group (n = 30) with four tasks of basic and complex emotion recognition through face and body movements, and two non-emotional control tasks. To analyze the dissociation between facial and body expressions, we used Crawford and Garthwaite's operational criteria, and we compared the patient and the control group performance with a modified one-tailed t-test designed specifically for single-case studies. There were no statistically significant differences between the patient's and the control group's performances on the non-emotional body movement task or the facial perception task. For both kinds of emotions (basic and complex) when the patient's performance was compared to the control group's, statistically significant differences were only observed for the recognition of body expressions. There were no significant differences between the patient's and the control group's correct answers for emotional facial stimuli. Our results showed a profile of impaired emotion recognition through body expressions and intact performance with facial expressions. This is the first case study that describes the existence of this kind of dissociation pattern between facial and body expressions of basic and complex emotions.
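The modified one-tailed t-test for single-case studies referred to in this abstract is, in this literature, typically the Crawford-Howell statistic, which treats the control sample mean and standard deviation as estimates rather than population parameters: t = (x - m) / (s * sqrt((n + 1) / n)) with n - 1 degrees of freedom. A sketch with illustrative scores, not the study's data:

    # Crawford-Howell t-test comparing one patient's score with a small
    # control sample. Scores below are illustrative, not the study's data.
    from statistics import mean, stdev
    from math import sqrt
    from scipy.stats import t as t_dist

    def crawford_howell(case_score, control_scores):
        n = len(control_scores)
        m, s = mean(control_scores), stdev(control_scores)
        t = (case_score - m) / (s * sqrt((n + 1) / n))
        # One-tailed p in the deficit direction (case below controls).
        p_one_tailed = t_dist.cdf(t, df=n - 1)
        return t, p_one_tailed

    controls = [27, 29, 28, 30, 26, 28, 29, 27, 30, 28]
    print(crawford_howell(18, controls))

The inflation factor sqrt((n + 1) / n) is what distinguishes this from a z-score against the control mean, preventing spuriously significant deficits with small control samples.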
Facial emotion recognition ability: psychiatry nurses versus nurses from other departments.
Gultekin, Gozde; Kincir, Zeliha; Kurt, Merve; Catal, Yasir; Acil, Asli; Aydin, Aybike; Özcan, Mualla; Delikkaya, Busra N; Kacar, Selma; Emul, Murat
2016-12-01
Facial emotion recognition is a basic element of non-verbal communication. Although some researchers have shown that recognizing facial expressions may be important in the interaction between doctors and patients, there are no studies concerning facial emotion recognition in nurses. Here, we aimed to investigate facial emotion recognition ability in nurses and to compare this ability between nurses from psychiatry and other departments. In this cross-sectional study, sixty-seven nurses were divided into two groups according to their departments: psychiatry (n=31) and other departments (n=36). A Facial Emotion Recognition Test, constructed from a set of photographs from Ekman and Friesen's book "Pictures of Facial Affect", was administered to all participants. In the whole group, the facial emotion recognized with the highest mean accuracy was happiness (99.14%), while the least accurately recognized facial expression was fear (47.71%). There were no significant differences between the two groups in mean accuracy rates for recognizing happy, sad, fearful, angry, and surprised facial expressions (for all, p>0.05). The ability to recognize disgusted and neutral facial emotions tended to be better in the other nurses than in the psychiatry nurses (p=0.052 and p=0.053, respectively). This study was the first to reveal no difference in facial emotion recognition ability between psychiatry nurses and non-psychiatry nurses. In medical education curricula throughout the world, no specific training program is scheduled for recognizing the emotional cues of patients. We consider that improving the ability of medical staff to recognize facial emotion expressions might be beneficial in reducing inappropriate patient-staff interactions.
Lysaker, Paul H; Hasson-Ohayon, Ilanit; Kravetz, Shlomo; Kent, Jerillyn S; Roe, David
2013-04-30
Many individuals with schizophrenia have been found to experience difficulties recognizing a range of their own mental states, including memories and emotions. While there is some evidence that the self-perception of empathy in schizophrenia is often at odds with objective observations, little is known about the correlates of rates of concordance between self and rater assessments of empathy for this group. To explore this issue, we gathered self and rater assessments of empathy, in addition to assessments of emotion recognition using the Bell Lysaker Emotion Recognition Task, insight using the Scale to Assess Unawareness of Mental Disorder, and symptoms using the Positive and Negative Syndrome Scale, from 91 adults diagnosed with schizophrenia spectrum disorders. Results revealed that participants with better emotion recognition, better insight, fewer positive symptoms and fewer depressive symptoms produced self-ratings of empathy that were more strongly correlated with assessments of empathy performed by raters than participants with greater deficits in these domains. Results suggest that deficits in emotion recognition, along with poor insight and higher levels of positive and depressive symptoms, may affect the degree of agreement between self and rater assessments of empathy in schizophrenia. Published by Elsevier Ireland Ltd.
Breastfeeding experience differentially impacts recognition of happiness and anger in mothers.
Krol, Kathleen M; Kamboj, Sunjeev K; Curran, H Valerie; Grossmann, Tobias
2014-11-12
Breastfeeding is a dynamic biological and social process based on hormonal regulation involving oxytocin. While there is much work on the role of breastfeeding in infant development and on the role of oxytocin in socio-emotional functioning in adults, little is known about how breastfeeding impacts emotion perception during motherhood. We therefore examined whether breastfeeding influences emotion recognition in mothers. Using a dynamic emotion recognition task, we found that longer durations of exclusive breastfeeding were associated with faster recognition of happiness, providing evidence for facilitated processing of positive facial expressions. In addition, we found that a greater number of breastfed meals per day was associated with slower recognition of anger. Our findings are in line with current views of oxytocin function and support accounts that view maternal behaviour as tuned to prosocial responsiveness, by showing that vital elements of maternal care can facilitate rapid responding to affiliative stimuli and reduce the importance of threatening stimuli.
Guillery-Girard, Bérengère; Clochon, Patrice; Giffard, Bénédicte; Viard, Armelle; Egler, Pierre-Jean; Baleyte, Jean-Marc; Eustache, Francis; Dayan, Jacques
2013-09-01
"Travelling in time," a central feature of episodic memory is severely affected among individuals with Post Traumatic Stress Disorder (PTSD) with two opposite effects: vivid traumatic memories are unorganized in temporality (bottom-up processes), non-traumatic personal memories tend to lack spatio-temporal details and false recognitions occur more frequently that in the general population (top-down processes). To test the effect of these two types of processes (i.e. bottom-up and top-down) on emotional memory, we conducted two studies in healthy and traumatized adolescents, a period of life in which vulnerability to emotion is particularly high. Using negative and neutral images selected from the international affective picture system (IAPS), stimuli were divided into perceptual images (emotion generated by perceptual details) and conceptual images (emotion generated by the general meaning of the material). Both categories of stimuli were then used, along with neutral pictures, in a memory task with two phases (encoding and recognition). In both populations, we reported a differential effect of the emotional material on encoding and recognition. Negative perceptual scenes induced an attentional capture effect during encoding and enhanced the recollective distinctiveness. Conversely, the encoding of conceptual scenes was similar to neutral ones, but the conceptual relatedness induced false memories at retrieval. However, among individuals with PTSD, two subgroups of patients were identified. The first subgroup processed the scenes faster than controls, except for the perceptual scenes, and obtained similar performances to controls in the recognition task. The second subgroup group desmonstrated an attentional deficit in the encoding task with no benefit from the distinctiveness associated with negative perceptual scenes on memory performances. These findings provide a new perspective on how negative emotional information may have opposite influences on memory in normal and traumatized individuals. It also gives clues to understand how intrusive memories and overgeneralization takes place in PTSD. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hooker, Christine I; Bruce, Lori; Fisher, Melissa; Verosky, Sara C; Miyakawa, Asako; D'Esposito, Mark; Vinogradov, Sophia
2013-08-30
Both cognitive and social-cognitive deficits impact functional outcome in schizophrenia. Cognitive remediation studies indicate that targeted cognitive and/or social-cognitive training improves behavioral performance on trained skills. However, the neural effects of training in schizophrenia and their relation to behavioral gains are largely unknown. This study tested whether a 50-h intervention which included both cognitive and social-cognitive training would influence neural mechanisms that support social cognition. Schizophrenia participants completed a computer-based intervention of either auditory-based cognitive training (AT) plus social-cognition training (SCT) (N=11) or non-specific computer games (CG) (N=11). Assessments included a functional magnetic resonance imaging (fMRI) task of facial emotion recognition, and behavioral measures of cognition, social cognition, and functional outcome. The fMRI results showed the predicted group-by-time interaction. Results were strongest for emotion recognition of happy, surprise and fear: relative to CG participants, AT+SCT participants showed a neural activity increase in bilateral amygdala, right putamen and right medial prefrontal cortex. Across all participants, pre-to-post intervention neural activity increase in these regions predicted behavioral improvement on an independent emotion perception measure (MSCEIT: Perceiving Emotions). Among AT+SCT participants alone, neural activity increase in the right amygdala predicted behavioral improvement in emotion perception. The findings indicate that combined cognition and social-cognition training improves neural systems that support social-cognition skills. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
van Harmelen, Anne-Laura; van Tol, Marie-José; Dalgleish, Tim; van der Wee, Nic J A; Veltman, Dick J; Aleman, André; Spinhoven, Philip; Penninx, Brenda W J H; Elzinga, Bernet M
2014-12-01
Childhood emotional maltreatment (CEM) has adverse effects on medial prefrontal cortex (mPFC) morphology, a structure that is crucial for cognitive functioning and (emotional) memory and which modulates the limbic system. In addition, CEM has been linked to amygdala hyperactivity during emotional face processing. However, no study has yet investigated the functional neural correlates of neutral and emotional memory in adults reporting CEM. Using functional magnetic resonance imaging, we investigated CEM-related differential activations in mPFC during the encoding and recognition of positive, negative and neutral words. The sample (N = 194) consisted of patients with depression and/or anxiety disorders and healthy controls (HC) reporting CEM (n = 96) and patients and HC reporting no abuse (n = 98). We found a consistent pattern of mPFC hypoactivation during encoding and recognition of positive, negative and neutral words in individuals reporting CEM. These results were not explained by psychopathology or severity of depression or anxiety symptoms, or by gender, level of neuroticism, parental psychopathology, negative life events, antidepressant use or decreased mPFC volume in the CEM group. These findings indicate mPFC hypoactivity in individuals reporting CEM during emotional and neutral memory encoding and recognition. Our findings suggest that CEM may increase individuals' risk to the development of psychopathology on differential levels of processing in the brain; blunted mPFC activation during higher order processing and enhanced amygdala activation during automatic/lower order emotion processing. These findings are vital in understanding the long-term consequences of CEM. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Arnold, Aiden E G F; Iaria, Giuseppe; Goghari, Vina M
2016-02-28
Schizophrenia is associated with deficits in face perception and emotion recognition. Despite consistent behavioural results, the neural mechanisms underlying these cognitive abilities have been difficult to isolate, in part due to differences in neuroimaging methods used between studies for identifying regions in the face processing system. Given this problem, we aimed to validate a recently developed fMRI-based dynamic functional localizer task for use in studies of psychiatric populations and specifically schizophrenia. Previously, this functional localizer successfully identified each of the core face processing regions (i.e. fusiform face area, occipital face area, superior temporal sulcus), and regions within an extended system (e.g. amygdala) in healthy individuals. In this study, we tested the functional localizer success rate in 27 schizophrenia patients and in 24 community controls. Overall, the core face processing regions were localized equally between both the schizophrenia and control group. Additionally, the amygdala, a candidate brain region from the extended system, was identified in nearly half the participants from both groups. These results indicate the effectiveness of a dynamic functional localizer at identifying regions of interest associated with face perception and emotion recognition in schizophrenia. The use of dynamic functional localizers may help standardize the investigation of the facial and emotion processing system in this and other clinical populations. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Emotion Analysis of Telephone Complaints from Customer Based on Affective Computing.
Gong, Shuangping; Dai, Yonghui; Ji, Jun; Wang, Jinzhao; Sun, Hai
2015-01-01
Customer complaints have become important feedback for modern enterprises seeking to improve their product and service quality as well as customer loyalty. As one of the most common channels for customer complaints, telephone communication carries rich emotional information in speech, which provides valuable resources for perceiving the customer's satisfaction and studying complaint-handling skills. This paper studies the characteristics of telephone complaint speech and proposes an analysis method based on affective computing technology, which can recognize the dynamic changes of customer emotions from the conversations between the service staff and the customer. The recognition process includes speaker recognition, emotional feature parameter extraction, and dynamic emotion recognition. Experimental results show that this method is effective and can reach high recognition rates for happy and angry states. It has been successfully applied to operation quality and service administration in a telecom and Internet service company.
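The abstract names a three-stage recognition process (speaker recognition, emotional feature parameter extraction, dynamic emotion recognition) without implementation detail. The following is a minimal sketch of how such a pipeline could be wired together, assuming MFCC features, per-speaker Gaussian mixture models, and an SVM emotion classifier; these components are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a telephone-complaint emotion pipeline.
# librosa provides acoustic features, scikit-learn the models; the
# specific features and classifiers are assumptions for illustration.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def acoustic_features(wav_path, sr=8000):
    """Frame-level MFCCs from telephone-bandwidth audio."""
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # (frames, 13)

# Stage 1: speaker recognition. One GMM per speaker (agent, customer),
# trained offline; each frame goes to whichever model scores it higher.
def label_speaker(frames, gmm_agent, gmm_customer):
    return np.where(gmm_agent.score_samples(frames) >
                    gmm_customer.score_samples(frames),
                    "agent", "customer")

# Stages 2-3: pool frame features into a per-window vector and classify,
# window by window, to track the dynamic changes of customer emotion.
def window_vector(frames):
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

emotion_clf = SVC(kernel="rbf")  # trained offline on labeled happy/angry/neutral turns
```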
Remembering the snake in the grass: Threat enhances recognition but not source memory.
Meyer, Miriam Magdalena; Bell, Raoul; Buchner, Axel
2015-12-01
Research on the influence of emotion on source memory has yielded inconsistent findings. The object-based framework (Mather, 2007) predicts that negatively arousing stimuli attract attention, resulting in enhanced within-object binding and, thereby, enhanced source memory for intrinsic context features of emotional stimuli. To test this prediction, we presented pictures of threatening and harmless animals, the color of which had been experimentally manipulated. In a memory test, old-new recognition for the animals and source memory for their color were assessed. In all 3 experiments, old-new recognition was better for the more threatening material, which supports previous reports of an emotional memory enhancement. This recognition advantage was due to the emotional properties of the stimulus material and was not specific to snake stimuli. However, inconsistent with the prediction of the object-based framework, intrinsic source memory was not affected by emotion. (c) 2015 APA, all rights reserved.
Emotion recognition from speech: tools and challenges
NASA Astrophysics Data System (ADS)
Al-Talabani, Abdulbasit; Sellahewa, Harin; Jassim, Sabah A.
2015-05-01
Human emotion recognition from speech is studied frequently for its importance in many applications, e.g. human-computer interaction. There is wide diversity and little agreement about the basic emotions or emotion-related states on the one hand, and about where the emotion-related information lies in the speech signal on the other. These diversities motivate our investigations into extracting meta-features using the PCA approach, or using a non-adaptive random projection (RP), which significantly reduce the high-dimensional speech feature vectors that may contain a wide range of emotion-related information. Subsets of meta-features are fused to increase the performance of the recognition model, which adopts the score-based LDC classifier. We shall demonstrate that our scheme outperforms state-of-the-art results when tested on non-prompted databases or acted databases (i.e. when subjects act specific emotions while uttering a sentence). However, the huge gap between accuracy rates achieved on the different types of speech datasets raises questions about the way emotions modulate speech. In particular, we shall argue that emotion recognition from speech should not be dealt with as a classification problem. We shall demonstrate the presence of a spectrum of different emotions in the same speech portion, especially in the non-prompted datasets, which tend to be more "natural" than the acted datasets, where the subjects attempt to suppress all but one emotion.
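As a concrete illustration of the two reduction routes named above (PCA meta-features versus a non-adaptive random projection) feeding a linear discriminant classifier, here is a hedged sketch using scikit-learn; the feature dimensions and synthetic data are placeholders, not the paper's setup.

```python
# Sketch: reduce high-dimensional speech feature vectors with either PCA
# or a non-adaptive Gaussian random projection, then classify with a
# linear discriminant classifier (LDC). Data and sizes are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6000))   # 200 utterances x 6000 raw features
y = rng.integers(0, 6, size=200)   # six emotion classes (toy labels)

reducers = [("PCA meta-features", PCA(n_components=50)),
            ("random projection", GaussianRandomProjection(n_components=50))]
for name, reducer in reducers:
    Z = reducer.fit_transform(X)                    # 6000 -> 50 dimensions
    ldc = LinearDiscriminantAnalysis().fit(Z, y)
    print(name, "training accuracy:", ldc.score(Z, y))
```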
ERIC Educational Resources Information Center
Brosnan, Mark; Johnson, Hilary; Grawmeyer, Beate; Chapman, Emma; Benton, Laura
2015-01-01
There is equivocal evidence as to whether there is a deficit in recognising emotional expressions in Autism spectrum disorder (ASD). This study compared emotion recognition in ASD in three types of emotion expression media (still image, dynamic image, auditory) across human stimuli (e.g. photo of a human face) and animated stimuli (e.g. cartoon…
Emotion recognition and oxytocin in patients with schizophrenia
Averbeck, B. B.; Bobin, T.; Evans, S.; Shergill, S. S.
2012-01-01
Background Studies have suggested that patients with schizophrenia are impaired at recognizing emotions. Recently, it has been shown that the neuropeptide oxytocin can have beneficial effects on social behaviors. Method To examine emotion recognition deficits in patients and see whether oxytocin could improve these deficits, we carried out two experiments. In the first experiment we recruited 30 patients with schizophrenia and 29 age- and IQ-matched control subjects, and gave them an emotion recognition task. Following this, we carried out a second experiment in which we recruited 21 patients with schizophrenia for a double-blind, placebo-controlled cross-over study of the effects of oxytocin on the same emotion recognition task. Results In the first experiment we found that patients with schizophrenia had a deficit relative to controls in recognizing emotions. In the second experiment we found that administration of oxytocin improved the ability of patients to recognize emotions. The improvement was consistent and occurred for most emotions, and was present whether patients were identifying morphed or non-morphed faces. Conclusions These data add to a growing literature showing beneficial effects of oxytocin on social–behavioral tasks, as well as clinical symptoms. PMID:21835090
Weighted Feature Gaussian Kernel SVM for Emotion Recognition
Jia, Qingxuan
2016-01-01
Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight for each. Then, we obtain a weighted feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function performs well in terms of correct recognition rate. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods. PMID:27807443
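The method's central object, a Gaussian kernel in which each feature dimension is weighted by its subregion's standalone recognition rate, K(x, y) = exp(-gamma * sum_k w_k * (x_k - y_k)^2), can be written as a callable kernel for scikit-learn's SVC. In the sketch below the weights and data are placeholders; only the kernel form follows the abstract.

```python
# Sketch of a weighted-feature Gaussian kernel SVM. The weight vector w
# holds one entry per feature (e.g., derived from each facial subregion's
# standalone recognition rate); the values here are placeholders.
import numpy as np
from sklearn.svm import SVC

def weighted_rbf(w, gamma=0.1):
    def kernel(X, Y):
        # Gram matrix: K[i, j] = exp(-gamma * sum_k w_k * (X[i,k] - Y[j,k])^2)
        sq_diff = (X[:, None, :] - Y[None, :, :]) ** 2
        return np.exp(-gamma * np.einsum("ijk,k->ij", sq_diff, w))
    return kernel

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))                  # 8 subregion features (toy)
y = rng.integers(0, 2, size=60)               # two emotion classes (toy)
w = np.array([0.9, 0.9, 0.8, 0.5, 0.4, 0.4, 0.3, 0.2])

clf = SVC(kernel=weighted_rbf(w)).fit(X, y)
print(clf.predict(X[:5]))
```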
Mapping the emotional face. How individual face parts contribute to successful emotion recognition.
Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna
2017-01-01
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing us to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.
The automaticity of emotion recognition.
Tracy, Jessica L; Robins, Richard W
2008-02-01
Evolutionary accounts of emotion typically assume that humans evolved to quickly and efficiently recognize emotion expressions because these expressions convey fitness-enhancing messages. The present research tested this assumption in 2 studies. Specifically, the authors examined (a) how quickly perceivers could recognize expressions of anger, contempt, disgust, embarrassment, fear, happiness, pride, sadness, shame, and surprise; (b) whether accuracy is improved when perceivers deliberate about each expression's meaning (vs. respond as quickly as possible); and (c) whether accurate recognition can occur under cognitive load. Across both studies, perceivers quickly and efficiently (i.e., under cognitive load) recognized most emotion expressions, including the self-conscious emotions of pride, embarrassment, and shame. Deliberation improved accuracy in some cases, but these improvements were relatively small. Discussion focuses on the implications of these findings for the cognitive processes underlying emotion recognition.
Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.
Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula
2016-03-01
When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.
Golan, Ofer; Ashwin, Emma; Granader, Yael; McClintock, Suzy; Day, Kate; Leggett, Victoria; Baron-Cohen, Simon
2010-03-01
This study evaluated The Transporters, an animated series designed to enhance emotion comprehension in children with autism spectrum conditions (ASC). n = 20 children with ASC (aged 4-7) watched The Transporters everyday for 4 weeks. Participants were tested before and after intervention on emotional vocabulary and emotion recognition at three levels of generalization. Two matched control groups of children (ASC group, n = 18 and typically developing group, n = 18) were also assessed twice without any intervention. The intervention group improved significantly more than the clinical control group on all task levels, performing comparably to typical controls at Time 2. We conclude that using The Transporters significantly improves emotion recognition in children with ASC. Future research should evaluate the series' effectiveness with lower-functioning individuals.
Emotion recognition in fathers and mothers at high-risk for child physical abuse.
Asla, Nagore; de Paúl, Joaquín; Pérez-Albéniz, Alicia
2011-09-01
The present study was designed to determine whether parents at high risk for physical child abuse, in comparison with parents at low risk, show deficits in emotion recognition, as well as to examine the moderating effect of gender and stress on the relationship between risk for physical child abuse and emotion recognition. Based on their scores on the Abuse Scale of the CAP Inventory (Milner, 1986), 64 parents at high risk (24 fathers and 40 mothers) and 80 parents at low risk (40 fathers and 40 mothers) for physical child abuse were selected. The Subtle Expression Training Tool/Micro Expression Training Tool (Ekman, 2004a, 2004b) and the Diagnostic Analysis of Nonverbal Accuracy II (Nowicki & Carton, 1993) were used to assess emotion recognition. As expected, parents at high risk, in contrast to parents at low risk, showed deficits in emotion recognition. However, differences between high- and low-risk participants were observed only for fathers, not for mothers. Whereas fathers at high risk for physical child abuse made more errors than mothers at high risk, no differences between mothers and fathers at low risk were found. No interaction between stress, gender, and risk status was observed for errors in emotion recognition. The present findings, if confirmed with physical abusers, could help further our understanding of deficits in information processing among physically abusive parents and could inform treatment strategies specifically focused on emotion recognition. Moreover, if the gender differences can be confirmed, the findings could help in developing specific treatment programs for abusive fathers. Copyright © 2011 Elsevier Ltd. All rights reserved.
Facial Emotion Recognition in Child Psychiatry: A Systematic Review
ERIC Educational Resources Information Center
Collin, Lisa; Bindra, Jasmeet; Raju, Monika; Gillberg, Christopher; Minnis, Helen
2013-01-01
This review focuses on facial affect (emotion) recognition in children and adolescents with psychiatric disorders other than autism. A systematic search, using PRISMA guidelines, was conducted to identify original articles published prior to October 2011 pertaining to face recognition tasks in case-control studies. Used in the qualitative…
Cross-cultural decoding of positive and negative non-linguistic emotion vocalizations.
Laukka, Petri; Elfenbein, Hillary Anger; Söder, Nela; Nordström, Henrik; Althoff, Jean; Chui, Wanda; Iraki, Frederick K; Rockstuhl, Thomas; Thingujam, Nutankumar S
2013-01-01
Which emotions are associated with universally recognized non-verbal signals? We address this issue by examining how reliably non-linguistic vocalizations (affect bursts) can convey emotions across cultures. Actors from India, Kenya, Singapore, and the USA were instructed to produce vocalizations that would convey nine positive and nine negative emotions to listeners. The vocalizations were judged by Swedish listeners using a within-valence forced-choice procedure, where positive and negative emotions were judged in separate experiments. Results showed that listeners could recognize a wide range of positive and negative emotions with accuracy above chance. For positive emotions, we observed the highest recognition rates for relief, followed by lust, interest, serenity and positive surprise, with affection and pride receiving the lowest recognition rates. Anger, disgust, fear, sadness, and negative surprise received the highest recognition rates for negative emotions, with the lowest rates observed for guilt and shame. In summary, the results showed that the voice can reveal both basic emotions and several positive emotions other than happiness across cultures, but self-conscious emotions such as guilt, pride, and shame seem not to be well recognized from non-linguistic vocalizations.
Facial affect recognition deficit as a marker of genetic vulnerability to schizophrenia.
Alfimova, Margarita V; Abramova, Lilia I; Barhatova, Aleksandra I; Yumatova, Polina E; Lyachenko, Galina L; Golimbet, Vera E
2009-05-01
The aim of this study was to investigate the possibility that affect recognition impairments are associated with genetic liability to schizophrenia. In a group of 55 unaffected relatives of schizophrenia patients (parents and siblings) we examined the capacity to detect facially expressed emotions and its relationship to schizotypal personality, neurocognitive functioning, and the subject's actual emotional state. The relatives were compared with 103 schizophrenia patients and 99 healthy subjects without any family history of psychoses. Emotional stimuli were nine black-and-white photos of actors who portrayed six basic emotions as well as interest, contempt, and shame. The results revealed an affect recognition deficit in relatives, though milder than that in the patients themselves. No correlation between the deficit and schizotypal personality measured with the SPQ was detected in the group of relatives. Neither cognitive functioning, including attention, verbal memory and linguistic ability, nor actual emotional states accounted for their affect recognition impairments. The results suggest that the facial affect recognition deficit in schizophrenia may be related to genetic predisposition to the disorder and may serve as an endophenotype in molecular-genetic studies.
Is Emotion Recognition Impaired in Individuals with Autism Spectrum Disorders?
ERIC Educational Resources Information Center
Tracy, Jessica L.; Robins, Richard W.; Schriber, Roberta A.; Solomon, Marjorie
2011-01-01
Researchers have argued that individuals with autism spectrum disorders (ASDs) use an effortful "systematizing" process to recognize emotion expressions, whereas typically developing (TD) individuals use a more holistic process. If this is the case, individuals with ASDs should show slower and less efficient emotion recognition, particularly for…
Emotion Understanding in Children with ADHD
ERIC Educational Resources Information Center
Da Fonseca, David; Seguier, Valerie; Santos, Andreia; Poinso, Francois; Deruelle, Christine
2009-01-01
Several studies suggest that children with ADHD tend to perform worse than typically developing children on emotion recognition tasks. However, most of these studies have focused on the recognition of facial expression, while there is evidence that context plays a major role on emotion perception. This study aims at further investigating emotion…
Music Education Intervention Improves Vocal Emotion Recognition
ERIC Educational Resources Information Center
Mualem, Orit; Lavidor, Michal
2015-01-01
The current study is an interdisciplinary examination of the interplay among music, language, and emotions. It consisted of two experiments designed to investigate the relationship between musical abilities and vocal emotional recognition. In experiment 1 (N = 24), we compared the influence of two short-term intervention programs--music and…
Teaching Emotion Recognition Skills to Children with Autism
ERIC Educational Resources Information Center
Ryan, Christian; Charragain, Caitriona Ni
2010-01-01
Autism is associated with difficulty interacting with others and an impaired ability to recognize facial expressions of emotion. Previous teaching programmes have not addressed weak central coherence. Emotion recognition training focused on components of facial expressions. The training was administered in small groups ranging from 4 to 7…
The level of cognitive function and recognition of emotions in older adults
Singh-Manoux, Archana; Batty, G. David; Ebmeier, Klaus P.; Jokela, Markus; Harmer, Catherine J.; Kivimäki, Mika
2017-01-01
Background The association between cognitive decline and the ability to recognise emotions in interpersonal communication is not well understood. We aimed to investigate the association between cognitive function and the ability to recognise emotions in other people's facial expressions across the full continuum of cognitive capacity. Methods Cross-sectional analysis of 4039 participants (3016 men, 1023 women aged 59 to 82 years) in the Whitehall II study. Cognitive function was assessed using the 30-item Mini-Mental State Examination (MMSE), further classified into 8 groups: 30, 29, 28, 27, 26, 25, 24, and <24 (possible dementia) MMSE points. The Facial Expression Recognition Task (FERT) was used to examine recognition of anger, fear, disgust, sadness, and happiness. Results The multivariable-adjusted difference in the percentage of accurate recognition between the highest and lowest MMSE groups was 14.9 (95% CI, 11.1–18.7) for anger, 15.5 (11.9–19.2) for fear, 18.5 (15.2–21.8) for disgust, 11.6 (7.3–16.0) for sadness, and 6.3 (3.1–9.4) for happiness. However, recognition of several emotions was already reduced after a 1- to 2-point reduction in MMSE score, and recognition worsened at an accelerating rate as MMSE scores declined further. Conclusions The ability to recognise emotion in facial expressions is affected at an early stage of cognitive impairment and may decline at an accelerating rate with the deterioration of cognitive function. Accurate recognition of happiness seems to be less affected by a severe decline in cognitive performance than recognition of negatively valenced emotions. PMID:28977015
Pistoia, Francesca; Carolei, Antonio; Sacco, Simona; Conson, Massimiliano; Pistarini, Caterina; Cazzulani, Benedetta; Stewart, Janet; Franceschini, Marco; Sarà, Marco
2015-12-15
There is much evidence to suggest that recognizing and sharing emotions with others require a first-hand experience of those emotions in our own body which, in turn, depends on the adequate perception of our own internal state (interoception) through preserved sensory pathways. Here we explored the contribution of interoception to first-hand emotional experiences and to the recognition of others' emotions. For this aim, 10 individuals with sensory deafferentation as a consequence of high spinal cord injury (SCI; five males and five females; mean age, 48 ± 14.8 years) and 20 healthy subjects matched for age, sex, and education were included in the study. Recognition of facial expressions and judgment of emotionally evocative scenes were investigated in both groups using the Ekman and Friesen set of Pictures of Facial Affect and the International Affective Picture System. A two-way mixed analysis of variance and post hoc comparisons were used to test differences among emotions and groups. Compared with healthy subjects, individuals with SCI, when asked to judge emotionally evocative scenes, had difficulties in judging their own emotional response to complex scenes eliciting fear and anger, while they were able to recognize the same emotions when conveyed by facial expressions. Our findings endorse a simulative view of emotional processing according to which the proper perception of our own internal state (interoception), through preserved sensory pathways, is crucial for first-hand experiences of the more primordial emotions, such as fear and anger.
Interference with facial emotion recognition by verbal but not visual loads.
Reed, Phil; Steed, Ian
2015-12-01
The ability to recognize emotions through facial characteristics is critical for social functioning, but is often impaired in those with a developmental or intellectual disability. The current experiments explored the degree to which interfering with the processing capacities of typically-developing individuals would produce a similar inability to recognize emotions through the facial elements of faces displaying particular emotions. It was found that increasing the cognitive load (in an attempt to model learning impairments in a typically developing population) produced deficits in correctly identifying emotions from facial elements. However, this effect was much more pronounced when using a concurrent verbal task than when employing a concurrent visual task, suggesting that there is a substantial verbal element to the labeling and subsequent recognition of emotions. This concurs with previous work conducted with those with developmental disabilities that suggests emotion recognition deficits are connected with language deficits. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dunn, Erin C; Crawford, Katherine M; Soare, Thomas W; Button, Katherine S; Raffeld, Miriam R; Smith, Andrew D A C; Penton-Voak, Ian S; Munafò, Marcus R
2018-03-07
Emotion recognition skills are essential for social communication. Deficits in these skills have been implicated in mental disorders. Prior studies of clinical and high-risk samples have consistently shown that children exposed to adversity are more likely than their unexposed peers to have emotion recognition skills deficits. However, only one population-based study has examined this association. We analyzed data from children participating in the Avon Longitudinal Study of Parents and Children, a prospective birth cohort (n = 6,506). We examined the association between eight adversities, assessed repeatedly from birth to age 8 (caregiver physical or emotional abuse; sexual or physical abuse; maternal psychopathology; one adult in the household; family instability; financial stress; parent legal problems; neighborhood disadvantage) and the ability to recognize facial displays of emotion measured using the faces subtest of the Diagnostic Assessment of Non-Verbal Accuracy (DANVA) at age 8.5 years. In addition to examining the role of exposure (vs. nonexposure) to each type of adversity, we also evaluated the role of the timing, duration, and recency of each adversity using a Least Angle Regression variable selection procedure. Over three-quarters of the sample experienced at least one adversity. We found no evidence to support an association between emotion recognition deficits and previous exposure to adversity, either in terms of total lifetime exposure, timing, duration, or recency, or when stratifying by sex. Results from the largest population-based sample suggest that even extreme forms of adversity are unrelated to emotion recognition deficits as measured by the DANVA, suggesting the possible immutability of emotion recognition in the general population. These findings emphasize the importance of population-based studies to generate generalizable results. © 2018 Association for Child and Adolescent Mental Health.
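For readers unfamiliar with the Least Angle Regression variable selection procedure mentioned above, the sketch below shows the general shape of such an analysis with scikit-learn's LARS implementation; the predictor names and data are invented placeholders, not ALSPAC variables.

```python
# Hedged sketch of LARS-based variable selection: candidate predictors
# encoding exposure, timing, duration, and recency of an adversity
# compete to explain an emotion recognition score. Synthetic data only.
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(2)
names = ["ever_exposed", "exposed_age_0_3", "exposed_age_3_5",
         "exposed_age_5_8", "duration", "recency"]
X = rng.normal(size=(500, len(names)))
y = rng.normal(size=500)                   # DANVA-like outcome (toy)

lars = Lars(n_nonzero_coefs=3).fit(X, y)   # keep the 3 first-entering terms
selected = [n for n, c in zip(names, lars.coef_) if c != 0]
print("selected predictors:", selected)
```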
Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.
Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina
2017-01-01
Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 components were more affected by the emotional value of the music administered during the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional faces. This suggests that emotional information coming from multiple sensory channels activates a crossmodal integration process that depends upon the emotional salience of the stimuli and the listener's appraisal.
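As a rough sketch of the two analyses named in the abstract, a linear mixed-effects model and a decision-tree learner applied to N170 measures, the example below uses statsmodels and scikit-learn with entirely hypothetical column names and synthetic data.

```python
# Illustrative only: mixed-effects model (random intercept per subject)
# and a decision tree fitted to N170 amplitude. Columns are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
n_subj, n_trials = 24, 30
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "musician": np.repeat(rng.integers(0, 2, n_subj), n_trials),
    "music_salience": rng.normal(size=n_subj * n_trials),
    "n170_amp": rng.normal(size=n_subj * n_trials),
})

# Fixed effects of expertise and music salience; random subject intercept.
mixed = smf.mixedlm("n170_amp ~ musician * music_salience",
                    df, groups=df["subject"]).fit()
print(mixed.summary())

# Complementary decision-tree learner over the same predictors.
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(df[["musician", "music_salience"]], df["n170_amp"])
```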
von Piekartz, H; Wallwork, S B; Mohr, G; Butler, D S; Moseley, G L
2015-04-01
Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotional recognition tasks could reflect problems with emotional processing, motor processing or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on both tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks; in the first task, they identified the facial emotion presented in a photograph. In the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm to induce implicit motor imagery. People with chronic facial pain performed worse than controls at both tasks (Facially Expressed Emotion Labelling (FEEL) task P < 0.001; left/right judgment task P < 0.001). Participants who were more accurate at one task were also more accurate at the other, regardless of group (P < 0.001, r² = 0.523). Participants with chronic facial pain were worse than controls at both the FEEL emotion recognition task and the left/right facial expression task, and performance covaried within participants. We propose that disrupted motor processing may underpin or at least contribute to the difficulty that facial pain patients have in emotion recognition and that further research that tests this proposal is warranted. © 2014 John Wiley & Sons Ltd.
Williams, Beth T; Gray, Kylie M; Tonge, Bruce J
2012-12-01
Children with autism have difficulties in emotion recognition and a number of interventions have been designed to target these problems. However, few emotion training interventions have been trialled with young children with autism and co-morbid ID. This study aimed to evaluate the efficacy of an emotion training programme for a group of young children with autism with a range of intellectual ability. Participants were 55 children with autistic disorder, aged 4-7 years (FSIQ 42-107). Children were randomly assigned to an intervention (n = 28) or control group (n = 27). Participants in the intervention group watched a DVD designed to teach emotion recognition skills to children with autism (the Transporters), whereas the control group watched a DVD of Thomas the Tank Engine. Participants were assessed on their ability to complete basic emotion recognition tasks, mindreading and theory of mind (TOM) tasks before and after the 4-week intervention period, and at 3-month follow-up. Analyses controlled for the effect of chronological age, verbal intelligence, gender and DVD viewing time on outcomes. Children in the intervention group showed improved performance in the recognition of anger compared with the control group, with few improvements maintained at 3-month follow-up. There was no generalisation of skills to TOM or social skills. The Transporters programme showed limited efficacy in teaching basic emotion recognition skills to young children with autism with a lower range of cognitive ability. Improvements were limited to the recognition of expressions of anger, with poor maintenance of these skills at follow-up. These findings provide limited support for the efficacy of the Transporters programme for young children with autism of a lower cognitive range. © 2012 The Authors. Journal of Child Psychology and Psychiatry © 2012 Association for Child and Adolescent Mental Health.
Labelling Facial Affect in Context in Adults with and without TBI
Turkstra, Lyn S.; Kraning, Sarah G.; Riedeman, Sarah K.; Mutlu, Bilge; Duff, Melissa; VanDenHeuvel, Sara
2017-01-01
Recognition of facial affect has been studied extensively in adults with and without traumatic brain injury (TBI), mostly by asking examinees to match basic emotion words to isolated faces. This method may not capture affect labelling in everyday life when faces are in context and choices are open-ended. To examine effects of context and response format, we asked 148 undergraduate students to label emotions shown on faces either in isolation or in natural visual scenes. Responses were categorised as representing basic emotions, social emotions, cognitive state terms, or appraisals. We used students’ responses to create a scoring system that was applied prospectively to five men with TBI. In both groups, over 50% of responses were neither basic emotion words nor synonyms, and there was no significant difference in response types between faces alone vs. in scenes. Adults with TBI used labels not seen in students’ responses, talked more overall, and often gave multiple labels for one photo. Results suggest benefits of moving beyond forced-choice tests of faces in isolation to fully characterise affect recognition in adults with and without TBI. PMID:29093643
Kometer, Michael; Schmidt, André; Bachmann, Rosilla; Studerus, Erich; Seifritz, Erich; Vollenweider, Franz X
2012-12-01
Serotonin (5-HT) 1A and 2A receptors have been associated with dysfunctional emotional processing biases in mood disorders. These receptors further predominantly mediate the subjective and behavioral effects of psilocybin and might be important for its recently suggested antidepressive effects. However, the effect of psilocybin on emotional processing biases and the specific contribution of 5-HT2A receptors across different emotional domains is unknown. In a randomized, double-blind study, 17 healthy human subjects received on 4 separate days placebo, psilocybin (215 μg/kg), the preferential 5-HT2A antagonist ketanserin (50 mg), or psilocybin plus ketanserin. Mood states were assessed by self-report ratings, and behavioral and event-related potential measurements were used to quantify facial emotional recognition and goal-directed behavior toward emotional cues. Psilocybin enhanced positive mood and attenuated recognition of negative facial expression. Furthermore, psilocybin increased goal-directed behavior toward positive compared with negative cues, facilitated positive but inhibited negative sequential emotional effects, and valence-dependently attenuated the P300 component. Ketanserin alone had no effects but blocked the psilocybin-induced mood enhancement and decreased recognition of negative facial expression. This study shows that psilocybin shifts the emotional bias across various psychological domains and that activation of 5-HT2A receptors is central in mood regulation and emotional face recognition in healthy subjects. These findings may not only have implications for the pathophysiology of dysfunctional emotional biases but may also provide a framework to delineate the mechanisms underlying psilocybin's putative antidepressant effects. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Derntl, Birgit; Habel, Ute; Windischberger, Christian; Robinson, Simon; Kryspin-Exner, Ilse; Gur, Ruben C; Moser, Ewald
2009-08-04
The ability to recognize emotions in facial expressions relies on an extensive neural network with the amygdala as the key node, as has typically been demonstrated for the processing of fearful stimuli. A sufficient characterization of the factors influencing and modulating amygdala function, however, has not yet been reached. Due to lacking or diverging results on its involvement in recognizing all or only certain negative emotions, the influence of gender or ethnicity is still under debate. This high-resolution fMRI study addresses the effects of some of the relevant parameters, such as emotional valence, gender and poser ethnicity, on amygdala activation during facial emotion recognition in 50 Caucasian subjects. Stimuli were color photographs of emotional Caucasian and African American faces. Bilateral amygdala activation was obtained to all emotional expressions (anger, disgust, fear, happy, and sad) and neutral faces across all subjects. However, only in males was a significant correlation between amygdala activation and behavioral response to fearful stimuli observed, indicating higher amygdala responses with better fear recognition, thus pointing to subtle gender differences. No significant influence of poser ethnicity on amygdala activation occurred, but analysis of recognition accuracy revealed a significant impact of poser ethnicity that was emotion-dependent. Applying high-resolution fMRI while subjects were performing an explicit emotion recognition task revealed bilateral amygdala activation to all emotions presented and to neutral expressions. This mechanism seems to operate similarly in healthy females and males and for both in-group and out-group ethnicities. Our results support the assumption that an intact amygdala response is fundamental to the processing of these salient stimuli due to its relevance-detection function.
Morosan, Larisa; Badoud, Deborah; Zaharia, Alexandra; Brosch, Tobias; Eliez, Stephan; Bateman, Anthony; Heller, Patrick; Debbané, Martin
2017-01-01
Background Previous research suggests that antisocial individuals present impairment in social cognitive processing, more specifically in emotion recognition (ER) and perspective taking (PT). The first aim of the present study was to investigate the recognition of a wide range of emotional expressions and visual PT capacities in a group of incarcerated male adolescents in comparison to a matched group of community adolescents. Secondly, we sought to explore the relationship between these two mechanisms in relation to psychopathic traits. Methods Forty-five male adolescents (22 incarcerated adolescents (mean age = 16.52 years, SD = 0.96) and 23 community adolescents (mean age = 16.43 years, SD = 1.41)) participated in the study. ER abilities were measured using a dynamic and multimodal task that requires the participants to watch short videos in which trained actors express 14 emotions. PT capacities were examined using a task recognized and proven to be sensitive to adolescent development, where participants had to follow the directions of another person whilst taking into consideration his perspective. Results We found a main effect of group on emotion recognition scores. In comparison to the community adolescents, the incarcerated adolescents presented lower recognition of three emotions: interest, anxiety and amusement. Analyses also revealed significant impairments in PT capacities in incarcerated adolescents. In addition, incarcerated adolescents' PT scores were uniquely correlated to their scores on recognition of interest. Conclusions The results corroborate previously reported impairments in ER and PT capacities in the incarcerated adolescents. The study also indicates an association between impairments in the recognition of interest and impairments in PT. PMID:28122048
Challenges older adults face in detecting deceit: the role of emotion recognition.
Stanley, Jennifer Tehan; Blanchard-Fields, Fredda
2008-03-01
Facial expressions of emotion are key cues to deceit (M. G. Frank & P. Ekman, 1997). Given that the literature on aging has shown an age-related decline in decoding emotions, we investigated (a) whether there are age differences in deceit detection and (b) if so, whether they are related to impairments in emotion recognition. Young and older adults (N = 364) were presented with 20 interviews (crime and opinion topics) and asked to decide whether each interview subject was lying or telling the truth. There were 3 presentation conditions: visual, audio, or audiovisual. In older adults, reduced emotion recognition was related to poor deceit detection in the visual condition for crime interviews only. (c) 2008 APA, all rights reserved.
Buchanan, Tony W; Bibas, David; Adolphs, Ralph
2010-05-14
How do we recognize emotions from other people? One possibility is that our own emotional experiences guide us in the online recognition of emotion in others. A distinct but related possibility is that emotion experience helps us to learn how to recognize emotions in childhood. We explored these ideas in a large sample of people (N = 4,608) ranging from 5 to over 50 years old. Participants were asked to rate the intensity of emotional experience in their own lives, as well as to perform a task of facial emotion recognition. Those who reported more intense experience of fear and happiness were significantly more accurate (closer to prototypical) in recognizing facial expressions of fear and happiness, respectively, and intense experience of fear was associated also with more accurate recognition of surprised and happy facial expressions. The associations held across all age groups. These results suggest that the intensity of one's own emotional experience of fear and happiness correlates with the ability to recognize these emotions in others, and demonstrate such an association as early as age 5.
Effects of delta-9-tetrahydrocannabinol on evaluation of emotional images
Ballard, Michael E; Bedi, Gillinder; de Wit, Harriet
2013-01-01
There is growing evidence that drugs of abuse alter processing of emotional information in ways that could be attractive to users. Our recent report that Δ9-tetrahydrocannabinol (THC) diminishes amygdalar activation in response to threat-related faces suggests that THC may modify evaluation of emotionally salient, particularly negative or threatening, stimuli. In this study, we examined the effects of acute THC on evaluation of emotional images. Healthy volunteers received two doses of THC (7.5 and 15 mg; p.o.) and placebo across separate sessions before performing tasks assessing facial emotion recognition and emotional responses to pictures of emotional scenes. THC significantly impaired recognition of facial fear and anger, but it only marginally impaired recognition of sadness and happiness. The drug did not consistently affect ratings of emotional scenes. THC's effects on emotional evaluation were not clearly related to its mood-altering effects. These results support our previous work, and show that THC reduces perception of facial threat. Nevertheless, THC does not appear to positively bias evaluation of emotional stimuli in general. PMID:22585232
Effects of exposure to facial expression variation in face learning and recognition.
Liu, Chang Hong; Chen, Wenfeng; Ward, James
2015-11-01
Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.
Mood-congruent false memories persist over time.
Knott, Lauren M; Thorley, Craig
2014-01-01
In this study, we examined the role of mood-congruency and retention interval on the false recognition of emotion-laden items using the Deese/Roediger-McDermott (DRM) paradigm. Previous research has shown a mood-congruent false memory enhancement during immediate recognition tasks. The present study examined the persistence of this effect following a one-week delay. Participants were placed in a negative or neutral mood, presented with negative-emotion and neutral-emotion DRM word lists, and given both immediate and delayed recognition tests. Results showed that a negative mood state increased remember judgments for negative-emotion critical lures, in comparison to neutral-emotion critical lures, on both immediate and delayed testing. These findings are discussed in relation to theories of spreading activation and emotion-enhanced memory, with consideration of the applied forensic implications of such findings.
Cross-cultural emotional prosody recognition: evidence from Chinese and British listeners.
Paulmann, Silke; Uskul, Ayse K
2014-01-01
This cross-cultural study of emotional tone of voice recognition tests the in-group advantage hypothesis (Elfenbein & Ambady, 2002) employing a quasi-balanced design. Individuals of Chinese and British background were asked to recognise pseudosentences produced by Chinese and British native speakers, displaying one of seven emotions (anger, disgust, fear, happiness, neutral tone of voice, sadness, and surprise). Findings reveal that emotional displays were recognised at rates higher than predicted by chance; however, members of each cultural group were more accurate in recognising the displays communicated by a member of their own cultural group than by a member of the other cultural group. Moreover, the evaluation of error matrices indicates that both cultural groups relied on similar mechanisms when recognising emotional displays from the voice. Overall, the study reveals evidence for both universal and culture-specific principles in vocal emotion recognition.
Ziaei, Maryam; Peira, Nathalie; Persson, Jonas
2014-02-15
Goal-directed behavior requires that cognitive operations can be protected from emotional distraction induced by task-irrelevant emotional stimuli. The brain processes involved in attending to relevant information while filtering out irrelevant information are still largely unknown. To investigate the neural and behavioral underpinnings of attending to task-relevant emotional stimuli while ignoring irrelevant stimuli, we used fMRI to assess brain responses during attentional instructed encoding within an emotional working memory (WM) paradigm. We showed that instructed attention to emotion during WM encoding resulted in enhanced performance, by means of increased memory performance and reduced reaction time, compared to passive viewing. A similar performance benefit was also demonstrated for recognition memory performance, although for positive pictures only. Functional MRI data revealed a network of regions involved in directed attention to emotional information for both positive and negative pictures that included medial and lateral prefrontal cortices, fusiform gyrus, insula, the parahippocampal gyrus, and the amygdala. Moreover, we demonstrate that regions in the striatum, and regions associated with the default-mode network were differentially activated for emotional distraction compared to neutral distraction. Activation in a sub-set of these regions was related to individual differences in WM and recognition memory performance, thus likely contributing to performing the task at an optimal level. The present results provide initial insights into the behavioral and neural consequences of instructed attention and emotional distraction during WM encoding. © 2013.
Anticipation of Negative Pictures Enhances the P2 and P3 in Their Later Recognition
Lin, Huiyan; Xiang, Jing; Li, Saili; Liang, Jiafeng; Jin, Hua
2015-01-01
Anticipation of emotional pictures has been found to be relevant to the encoding of the pictures as well as their later recognition performance. However, it is as yet unknown whether anticipation modulates neural activity in the later recognition of emotional pictures. To address this issue, participants in the present study were asked to view emotional (negative or neutral) pictures. The picture was preceded by a cue which indicated the emotional content of the picture in half of the trials (the anticipated condition) and without any cues in the other half (the unanticipated condition). Subsequently, participants had to perform an unexpected old/new recognition task in which old and novel pictures were presented without any preceding cues. Electroencephalography data was recorded during the recognition phase. Event-related potential results showed that for negative pictures, P2 and P3 amplitudes were larger in the anticipated as compared to the unanticipated condition; whereas this anticipation effect was not shown for neutral pictures. The present findings suggest that anticipation of negative pictures may enhance neural activity in their later recognition. PMID:26648860
Effect of Time Delay on Recognition Memory for Pictures: The Modulatory Role of Emotion
Wang, Bo
2014-01-01
This study investigated the modulatory role of emotion in the effect of time delay on recognition memory for pictures. Participants viewed neutral, positive and negative pictures, and took a recognition memory test 5 minutes, 24 hours, or 1 week after learning. The findings are: 1) For neutral, positive and negative pictures, overall recognition accuracy in the 5-min delay did not significantly differ from that in the 24-h delay. For neutral and positive pictures, overall recognition accuracy in the 1-week delay was lower than in the 24-h delay; for negative pictures, overall recognition in the 24-h and 1-week delays did not significantly differ. Negative emotion therefore modulates the effect of time delay on recognition memory, maintaining overall recognition accuracy only within a certain time frame. 2) For the three types of pictures, recollection and familiarity in the 5-min delay did not significantly differ from that in the 24-h and 1-week delays. Thus emotion does not appear to modulate the effect of time delay on recollection and familiarity. However, recollection in the 24-h delay was higher than in the 1-week delay, whereas familiarity in the 24-h delay was lower than in the 1-week delay. PMID:24971457
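The recollection and familiarity estimates discussed above are typically derived from remember/know judgments; one standard scheme is the independence remember/know correction sketched below. The abstract does not state the exact estimation procedure used, so this is an assumption for illustration.

```python
# Independence remember/know (IRK) estimates of recollection and
# familiarity from remember/know response rates to old and new items.
# This particular correction is an assumption, not the study's stated method.
def irk_estimates(p_rem_old, p_know_old, p_rem_new, p_know_new):
    recollection = p_rem_old - p_rem_new
    # Familiarity is estimated conditional on the item not being recollected.
    fam_old = p_know_old / (1.0 - p_rem_old)
    fam_new = p_know_new / (1.0 - p_rem_new)
    return recollection, fam_old - fam_new

# Toy rates: remember/know proportions for old items, then for new items.
print(irk_estimates(0.45, 0.30, 0.10, 0.15))
```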
Yao, Shih-Ying; Bull, Rebecca; Khng, Kiat Hui; Rahim, Anisa
2018-01-01
Understanding a child's ability to decode emotion expressions is important to allow early interventions for potential difficulties in social and emotional functioning. This study applied the Rasch model to investigate the psychometric properties of the NEPSY-II Affect Recognition subtest, a U.S.-normed measure for 3- to 16-year-olds that assesses the ability to recognize facial expressions of emotion. Data were collected from 1222 children attending preschools in Singapore. We first performed the Rasch analysis with the raw item data and examined the technical qualities and difficulty pattern of the studied items. We subsequently investigated the relation of the estimated affect recognition ability from the Rasch analysis to a teacher-reported measure of a child's behaviors, emotions, and relationships. Potential gender differences were also examined. The Rasch model fit our data well. Also, the NEPSY-II Affect Recognition subtest was found to have reasonable technical qualities, the expected item difficulty pattern, and the desired association with the external measure of children's behaviors, emotions, and relationships for both boys and girls. Overall, findings from this study suggest that the NEPSY-II Affect Recognition subtest is a promising measure of young children's affect recognition ability. Suggestions for future test improvement and research are discussed.
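For reference, the Rasch model applied here expresses the probability that child p answers item i correctly as a logistic function of the child's ability theta_p minus the item's difficulty b_i; a minimal sketch of the item response function follows.

```python
# Rasch item response function: P(correct) = 1 / (1 + exp(-(theta - b))),
# with ability theta and item difficulty b on the same logit scale.
import numpy as np

def rasch_prob(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# A harder item (b = 1.5) vs. an easier one (b = -0.5) for an average child.
print(rasch_prob(theta=0.0, b=np.array([1.5, -0.5])))
```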
Rigon, Arianna; Turkstra, Lyn; Mutlu, Bilge; Duff, Melissa
2018-01-01
Although moderate to severe traumatic brain injury (TBI) leads to facial affect recognition impairments in up to 39% of individuals, protective and risk factors for these deficits are unknown. The aim of the current study was to examine the effect of sex on emotion recognition abilities following TBI. We administered two separate emotion recognition tests (one static and one dynamic) to 53 individuals with moderate to severe TBI (Females=28) and 49 demographically matched comparisons (Females=22). We then investigated the presence of a sex-by-group interaction in emotion recognition accuracy. In the comparison group, there were no sex differences. In the TBI group, however, females significantly outperformed males in the dynamic (but not the static) task. Moreover, males (but not females) with TBI performed significantly worse than comparison participants in the dynamic task. Further analysis revealed that sex differences in emotion recognition abilities within the TBI group could not be explained by lesion location, TBI severity, or other neuropsychological variables. These findings suggest that sex may serve as a protective factor for social impairment following TBI and inform clinicians working with TBI as well as research on the neurophysiological correlates of sex differences in social functioning. PMID:27245826
Cultural differences in gaze and emotion recognition: Americans contrast more than Chinese.
Stanley, Jennifer Tehan; Zhang, Xin; Fung, Helene H; Isaacowitz, Derek M
2013-02-01
We investigated the influence of contextual expressions on emotion recognition accuracy and gaze patterns among American and Chinese participants. We expected Chinese participants would be more influenced by, and attend more to, contextual information than Americans. Consistent with our hypothesis, Americans were more accurate than Chinese participants at recognizing emotions embedded in the context of other emotional expressions. Eye-tracking data suggest that, for some emotions, Americans attended more to the target faces, and they made more gaze transitions to the target face than Chinese. For all emotions except anger and disgust, Americans appeared to use more of a contrasting strategy where each face was individually contrasted with the target face, compared with Chinese who used less of a contrasting strategy. Both cultures were influenced by contextual information, although the benefit of contextual information depended upon the perceptual dissimilarity of the contextual emotions to the target emotion and the gaze pattern employed during the recognition task. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Lau, Anna S; Fung, Joey; Wang, Shu-Wen; Kang, Sun-Mee
2009-01-01
Previous research has documented elevated levels of social anxiety in Asian American college students when compared with their European American peers. The authors hypothesized that higher symptoms among Asians could be explained by cultural differences in attunement to the emotional states of others. Socialization within interdependent cultures may cultivate concerns about accurately perceiving others' emotional responses, yet at the same time, norms governing emotional control may limit competencies in emotion recognition. A sample of 264 Asian American and European American college students completed measures of social anxiety, attunement concerns (shame socialization and loss of face), and attunement competencies (self-reported sensitivity and performance on emotion recognition tasks). Results confirmed that ethnic differences in social anxiety symptoms were mediated by differences in attunement concerns and competencies in emotion recognition. Asian American college students may find themselves in a double bind that leads to social unease because of a cultural emphasis on sensitivity to others' emotions in the midst of barriers to developing this attunement skill set.
Stewart, Suzanne L K; Schepman, Astrid; Haigh, Matthew; McHugh, Rhian; Stewart, Andrew J
2018-03-14
The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically and contextually influence the perceptual processing of emotional facial expressions in a separate task, even when those emotions are of a distinctive valence. Thus, our novel findings suggest that language acts as a contextual influence on the recognition of emotional facial expressions for both same and different valences.
Random Deep Belief Networks for Recognizing Emotions from Speech Signals.
Wen, Guihua; Li, Huihui; Huang, Jubing; Li, Danyang; Xun, Eryang
2017-01-01
Human emotions can now be recognized from speech signals using machine learning methods; however, these methods are challenged by low recognition accuracy in real applications owing to a lack of rich representational ability. Deep belief networks (DBN) can automatically discover multiple levels of representation in speech signals. To take full advantage of this, this paper presents an ensemble of random deep belief networks (RDBN) for speech emotion recognition. It first extracts the low-level features of the input speech signal and then uses them to construct many random subspaces. Each random subspace is fed to a DBN to yield higher-level features, which serve as the input to a classifier that outputs an emotion label. All output emotion labels are then fused through majority voting to decide the final emotion label for the input speech signal. Experimental results on benchmark speech emotion databases show that RDBN achieves better accuracy than the compared methods for speech emotion recognition.
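A hedged sketch of the ensemble logic described above: draw random feature subspaces, train one deep network per subspace, and fuse the predicted labels by majority voting. An sklearn MLPClassifier stands in for the DBN stage, and all dimensions, model counts, and data are illustrative placeholders rather than the paper's configuration.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 120))   # placeholder low-level speech features
    y = rng.integers(0, 4, size=200)  # placeholder labels for four emotions

    n_models, subspace_dim = 15, 40
    subspaces = [rng.choice(X.shape[1], subspace_dim, replace=False)
                 for _ in range(n_models)]
    models = [MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                            random_state=0).fit(X[:, idx], y)
              for idx in subspaces]

    def predict_majority(x):
        """Fuse the subspace models' outputs by majority voting."""
        votes = [m.predict(x[idx].reshape(1, -1))[0]
                 for m, idx in zip(models, subspaces)]
        return np.bincount(votes).argmax()

    print(predict_majority(X[0]))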
Automatic integration of social information in emotion recognition.
Mumenthaler, Christian; Sander, David
2015-04-01
This study investigated the automaticity of the influence of social inference on emotion recognition. Participants were asked to recognize dynamic facial expressions of emotion (fear or anger in Experiment 1 and blends of fear and surprise or of anger and disgust in Experiment 2) in a target face presented at the center of a screen while a subliminal contextual face appearing in the periphery expressed an emotion (fear or anger) or not (neutral) and either looked at the target face or not. Results of Experiment 1 revealed that recognition of the target emotion of fear was improved when a subliminal angry contextual face gazed toward, rather than away from, the fearful face. We replicated this effect in Experiment 2, in which facial expression blends of fear and surprise were more often and more rapidly categorized as expressing fear when the subliminal contextual face expressed anger and gazed toward, rather than away from, the target face. With the contextual face appearing for 30 ms in total, including only 10 ms of emotion expression, and being immediately masked, our data provide the first evidence that social influence on emotion recognition can occur automatically. (c) 2015 APA, all rights reserved.
Multimodal emotional state recognition using sequence-dependent deep hierarchical features.
Barros, Pablo; Jirak, Doreen; Weber, Cornelius; Wermter, Stefan
2015-12-01
Emotional state recognition has become an important topic for human-robot interaction in the past years. By determining emotion expressions, robots can identify important variables of human behavior and use these to communicate in a more human-like fashion, thereby extending the interaction possibilities. Human emotions are multimodal and spontaneous, which makes them hard for robots to recognize. Each modality has its own restrictions and constraints which, together with the non-structured behavior of spontaneous expressions, create several difficulties for the approaches in the literature, which are based on explicit feature extraction techniques and manual modality fusion. Our model uses a hierarchical feature representation to deal with spontaneous emotions, and learns how to integrate multiple modalities for non-verbal emotion recognition, making it suitable for an HRI scenario. Our experiments show that a significant improvement in recognition accuracy is achieved when we use hierarchical features and multimodal information, and our model improves the accuracy of state-of-the-art approaches from the 82.5% reported in the literature to 91.3% on a benchmark dataset of spontaneous emotion expressions. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
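A toy sketch of the general idea above, under heavy assumptions: each modality passes through its own stacked (hierarchical) encoder, and the resulting representations are fused for emotion classification. This PyTorch module, with invented dimensions and modality names, illustrates hierarchical-feature fusion only; it is not the authors' architecture.

    import torch
    import torch.nn as nn

    class ToyMultimodalNet(nn.Module):
        def __init__(self, face_dim=256, motion_dim=128, n_emotions=6):
            super().__init__()
            # one stacked encoder per modality ("hierarchical" features)
            self.face = nn.Sequential(nn.Linear(face_dim, 64), nn.ReLU(),
                                      nn.Linear(64, 32), nn.ReLU())
            self.motion = nn.Sequential(nn.Linear(motion_dim, 64), nn.ReLU(),
                                        nn.Linear(64, 32), nn.ReLU())
            self.head = nn.Linear(64, n_emotions)  # fused representation -> label

        def forward(self, face_x, motion_x):
            fused = torch.cat([self.face(face_x), self.motion(motion_x)], dim=-1)
            return self.head(fused)

    logits = ToyMultimodalNet()(torch.randn(4, 256), torch.randn(4, 128))
    print(logits.shape)  # (4, 6): one emotion score vector per sample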
Social cognition in schizophrenia and healthy aging: differences and similarities.
Silver, Henry; Bilker, Warren B
2014-12-01
Social cognition is impaired in schizophrenia but it is not clear whether this is specific to the illness and whether emotion perception is selectively affected. To study this, we examined the perception of emotional and non-emotional clues in facial expressions, a key social cognitive skill, in schizophrenia patients and old healthy individuals, using young healthy individuals as a reference. Tests of object recognition, visual orientation, psychomotor speed, and working memory were included to allow multivariate analysis taking into account other cognitive functions. Schizophrenia patients showed impairments in recognition of identity and emotional facial clues compared to young and old healthy groups. Severity was similar to that for object recognition and visuospatial processing. Older and younger healthy groups did not differ from each other on these tests. Schizophrenia patients and old healthy individuals were similarly impaired in the ability to automatically learn new faces during the testing procedure (measured by the CSTFAC index) compared to young healthy individuals. Social cognition is distinctly impaired in schizophrenia compared to healthy aging. Further study is needed to identify the mechanisms of automatic social cognitive learning impairment in schizophrenia patients and healthy aging individuals and to determine whether similar neural systems are affected. Copyright © 2014 Elsevier B.V. All rights reserved.
An Investigation of Emotion Recognition and Theory of Mind in People with Chronic Heart Failure
Habota, Tina; McLennan, Skye N.; Cameron, Jan; Ski, Chantal F.; Thompson, David R.; Rendell, Peter G.
2015-01-01
Objectives: Cognitive deficits are common in patients with chronic heart failure (CHF), but no study has investigated whether these deficits extend to social cognition. The present study provided the first empirical assessment of emotion recognition and theory of mind (ToM) in patients with CHF. In addition, it assessed whether each of these social cognitive constructs was associated with more general cognitive impairment. Methods: A group comparison design was used, with 31 CHF patients compared to 38 demographically matched controls. The Ekman Faces test was used to assess emotion recognition, and the Mind in the Eyes test to measure ToM. Measures assessing global cognition, executive functions, and verbal memory were also administered. Results: There were no differences between groups on emotion recognition or ToM. The CHF group’s performance was poorer on some executive measures, but memory was relatively preserved. In the CHF group, both emotion recognition performance and ToM ability correlated moderately with global cognition (r = .38, p = .034; r = .49, p = .005, respectively), but not with executive function or verbal memory. Conclusion: CHF patients with lower cognitive ability were more likely to have difficulty recognizing emotions and inferring the mental states of others. Clinical implications of these findings are discussed. PMID:26529409
CACNA1C risk variant affects facial emotion recognition in healthy individuals.
Nieratschker, Vanessa; Brückmann, Christof; Plewnia, Christian
2015-11-27
Recognition and correct interpretation of facial emotion is essential for social interaction and communication. Previous studies have shown that impairments in this cognitive domain are common features of several psychiatric disorders. Recent association studies identified CACNA1C as one of the most promising genetic risk factors for psychiatric disorders, and previous evidence suggests that the most replicated risk variant in CACNA1C (rs1006737) affects emotion recognition and processing. However, studies investigating the influence of rs1006737 on this intermediate phenotype in healthy subjects at the behavioral level are largely missing to date. Here, we applied the "Reading the Mind in the Eyes" test, a facial emotion recognition paradigm, in a cohort of 92 healthy individuals to address this question. Whereas accuracy was not affected by genotype, CACNA1C rs1006737 risk-allele carriers (AA/AG) showed significantly slower mean response times compared to individuals homozygous for the G-allele, indicating that healthy risk-allele carriers require more information to correctly identify a facial emotion. Our study is the first to provide evidence for an impairing behavioral effect of the CACNA1C risk variant rs1006737 on facial emotion recognition in healthy individuals and adds to the growing number of studies pointing towards CACNA1C as affecting intermediate phenotypes of psychiatric disorders.
Measuring Individual Differences in Sensitivities to Basic Emotions in Faces
ERIC Educational Resources Information Center
Suzuki, Atsunobu; Hoshino, Takahiro; Shigemasu, Kazuo
2006-01-01
The assessment of individual differences in facial expression recognition normally needs to address two major issues: (1) high agreement levels (ceiling effects) and (2) differential difficulty levels across emotions. We propose a new assessment method designed to quantify individual differences in the recognition of the six basic emotions,…
Abnormal Facial Emotion Recognition in Depression: Serial Testing in an Ultra-Rapid-Cycling Patient.
ERIC Educational Resources Information Center
George, Mark S.; Huggins, Teresa; McDermut, Wilson; Parekh, Priti I.; Rubinow, David; Post, Robert M.
1998-01-01
Mood disorder subjects have a selective deficit in recognizing human facial emotion. Whether the facial emotion recognition errors persist during normal mood states (i.e., are state vs. trait dependent) was studied in one male bipolar II patient. Results of five sessions are presented and discussed. (Author/EMK)
Liberal Bias Mediates Emotion Recognition Deficits in Frontal Traumatic Brain Injury
ERIC Educational Resources Information Center
Callahan, Brandy L.; Ueda, Keita; Sakata, Daisuke; Plamondon, Andre; Murai, Toshiya
2011-01-01
It is well-known that patients having sustained frontal-lobe traumatic brain injury (TBI) are severely impaired on tests of emotion recognition. Indeed, these patients have significant difficulty recognizing facial expressions of emotion, and such deficits are often associated with decreased social functioning and poor quality of life. As of yet,…
Recognizing Emotions: Testing an Intervention for Children with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Richard, Donna Abely; More, William; Joy, Stephen P.
2015-01-01
A severely impaired capacity for social interaction is one of the characteristics of individuals with autism spectrum disorder (ASD). Deficits in facial emotional recognition processing may be associated with this limitation. The Build-a-Face (BAF) art therapy intervention was developed to assist with emotional recognition through the viewing and…
A Multimodal Approach to Emotion Recognition Ability in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Jones, Catherine R. G.; Pickles, Andrew; Falcaro, Milena; Marsden, Anita J. S.; Happe, Francesca; Scott, Sophie K.; Sauter, Disa; Tregay, Jenifer; Phillips, Rebecca J.; Baird, Gillian; Simonoff, Emily; Charman, Tony
2011-01-01
Background: Autism spectrum disorders (ASD) are characterised by social and communication difficulties in day-to-day life, including problems in recognising emotions. However, experimental investigations of emotion recognition ability in ASD have been equivocal, hampered by small sample sizes, narrow IQ range and over-focus on the visual modality.…
ERIC Educational Resources Information Center
Arnold, Megan
2010-01-01
This research examined the emotion recognition abilities of preschoolers with and without hyperactivity and aggression. Previous research identified that school age children with Attention Deficit Hyperactivity Disorder (ADHD) have more difficulty understanding facial expressions associated with emotions, take longer than their age-matched peers…
ERIC Educational Resources Information Center
Uslu, Runa Idil; Kapci, Emine Gul; Yildirim, Revan; Oney, Esra
2010-01-01
Objectives: To develop an instrument which could assess the extent to which emotionally maltreating parental behavior was recognized by Turkish parents (Study 1) and to evaluate a group of parental and family characteristics that were related with their recognition of emotional maltreatment (Study 2). Methods: Self-administered instruments were…
Shih, Yu-Ling; Lin, Chia-Yen
2016-08-01
Action anticipation plays an important role in the successful performance of open skill sports, such as ball and combat sports. Evidence has shown that elite athletes of open sports excel in action anticipation. Most studies have targeted ball sports and agreed that information on body mechanics is one of the key determinants for successful action anticipation in open sports. However, less is known about combat sports, and whether facial emotions have an influence on athletes' action anticipation skill. It has been suggested that the understanding of intention in combat sports relies heavily on emotional context. Based on this suggestion, the present study compared the action anticipation performances of taekwondo athletes, weightlifting athletes, and non-athletes and then correlated these with their performances of emotion recognition. This study primarily found that accurate action anticipation does not necessarily rely on the dynamic information of movement, and that action anticipation performance is correlated with that of emotion recognition in taekwondo athletes, but not in weightlifting athletes. Our results suggest that the recognition of facial emotions plays a role in the action prediction in such combat sports as taekwondo.
Facial recognition in primary focal dystonia.
Rinnerthaler, Martina; Benecke, Cord; Bartha, Lisa; Entner, Tanja; Poewe, Werner; Mueller, Joerg
2006-01-01
The basal ganglia seem to be involved in emotional processing. Primary dystonia is a movement disorder considered to result from basal ganglia dysfunction, and the aim of the present study was to investigate emotion recognition in patients with primary focal dystonia. Thirty-two patients with primary cranial (n=12) and cervical (n=20) dystonia were compared to 32 healthy controls matched for age, sex, and educational level on the facially expressed emotion labeling (FEEL) test, a computer-based tool measuring a person's ability to recognize facially expressed emotions. Patients with cognitive impairment or depression were excluded. None of the patients received medication with a possible cognitive side effect profile and only those with mild to moderate dystonia were included. Patients with primary dystonia showed isolated deficits in the recognition of disgust (P=0.007), while no differences between patients and controls were found with regard to the other emotions (fear, happiness, surprise, sadness, and anger). The findings of the present study add further evidence to the conception that dystonia is not only a motor but a complex basal ganglia disorder including selective emotion recognition disturbances. Copyright (c) 2005 Movement Disorder Society.
Emotion recognition from multichannel EEG signals using K-nearest neighbor classification.
Li, Mi; Xu, Hongpei; Liu, Xingwang; Lu, Shengfu
2018-04-27
Many studies have addressed emotion recognition based on multi-channel electroencephalogram (EEG) signals. This paper explores how emotion recognition accuracy from EEG signals varies across frequency bands and with the number of channels. We classified emotional states in the valence and arousal dimensions using different combinations of EEG channels. First, the DEAP default preprocessed data were normalized. Next, EEG signals were divided into four frequency bands using the discrete wavelet transform, and entropy and energy were calculated as features for a K-nearest neighbor classifier. The classification accuracies for 10, 14, 18 and 32 EEG channels based on the gamma frequency band were 89.54%, 92.28%, 93.72% and 95.70% in the valence dimension and 89.81%, 92.24%, 93.69% and 95.69% in the arousal dimension. As the number of channels increases, the classification accuracy of emotional states also increases; the classification accuracy of the gamma frequency band is greater than that of the beta frequency band, followed by the alpha and theta frequency bands. This paper provides a reference for choosing frequency bands and channels for emotion recognition based on EEG.
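A minimal sketch of the pipeline described above: per-channel discrete wavelet decomposition, energy and entropy per sub-band, then a K-nearest neighbor classifier. The wavelet family, decomposition level, and random placeholder data are assumptions for illustration, not the paper's exact DEAP settings.

    import numpy as np
    import pywt
    from sklearn.neighbors import KNeighborsClassifier

    def band_features(signal, wavelet="db4", level=4):
        """Energy and Shannon entropy for each wavelet sub-band of one channel."""
        feats = []
        for c in pywt.wavedec(signal, wavelet, level=level):
            energy = np.sum(c ** 2)
            p = c ** 2 / (energy + 1e-12)  # normalize to a distribution
            feats += [energy, -np.sum(p * np.log2(p + 1e-12))]
        return feats

    rng = np.random.default_rng(0)
    X_raw = rng.normal(size=(80, 32, 1024))  # trials x channels x samples (placeholder)
    y = rng.integers(0, 2, size=80)          # binary valence labels (placeholder)
    X = np.array([[f for ch in trial for f in band_features(ch)] for trial in X_raw])
    knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)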
A Diffusion Model Analysis of Decision Biases Affecting Delayed Recognition of Emotional Stimuli.
Bowen, Holly J; Spaniol, Julia; Patel, Ronak; Voss, Andreas
2016-01-01
Previous empirical work suggests that emotion can influence accuracy and the cognitive biases underlying recognition memory, depending on the experimental conditions. The current study examines the effects of arousal and valence on delayed recognition memory using the diffusion model, which allows the separation of two decision biases thought to underlie memory: response bias and memory bias. Memory bias has not been given much attention in the literature but can provide insight into the retrieval dynamics of emotion-modulated memory. Participants viewed emotional pictorial stimuli; half were given a recognition test 1 day later and the other half 7 days later. Analyses revealed that emotional valence generally evokes liberal responding, whereas high arousal evokes liberal responding only at a short retention interval. The memory bias analyses indicated that participants experienced greater familiarity with high-arousal compared to low-arousal items, and this pattern became more pronounced as study-test lag increased; positive items evoke greater familiarity than negative items, and this pattern remained stable across retention intervals. The findings provide insight into the separate contributions of valence and arousal to the cognitive mechanisms underlying delayed emotion-modulated memory.
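To make the two biases concrete, here is a generic Euler simulation of a drift-diffusion process in which the starting point z captures response bias and the drift rate v captures memory bias (familiarity). This textbook-style sketch is offered for intuition only; it is not the authors' fitting procedure, and all parameter values are invented.

    import numpy as np

    def simulate_ddm(v, z, a=1.0, dt=0.001, sigma=1.0, rng=None):
        """Return (choice, rt): choice 1 = upper ('old') boundary, 0 = lower ('new')."""
        rng = rng if rng is not None else np.random.default_rng()
        x, t = z * a, 0.0  # start between boundaries; z > 0.5 is a liberal bias
        while 0.0 < x < a:
            x += v * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        return int(x >= a), t

    rng = np.random.default_rng(0)
    # a liberal response bias (z = 0.6) raises 'old' responses even at fixed drift
    old_rate = np.mean([simulate_ddm(v=0.5, z=0.6, rng=rng)[0] for _ in range(500)])
    print(old_rate)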
Effects of Power on Mental Rotation and Emotion Recognition in Women.
Nissan, Tali; Shapira, Oren; Liberman, Nira
2015-10-01
Based on construal-level theory (CLT) and its view of power as an instance of social distance, we predicted that high power, relative to low power, would enhance women's mental-rotation performance and impede their emotion-recognition performance. The predicted effects of power emerged both when it was manipulated via a recall priming task (Study 1) and via environmental cues (Studies 2 and 3). Studies 3 and 4 found evidence for mediation by construal level of the effect of power on emotion recognition but not on mental rotation. We discuss potential mediating mechanisms for these effects based on both the social distance/construal level and the approach/inhibition views of power. We also discuss implications for optimizing performance on mental rotation and emotion recognition in everyday life. © 2015 by the Society for Personality and Social Psychology, Inc.
Modeling recall memory for emotional objects in Alzheimer's disease.
Sundstrøm, Martin
2011-07-01
To examine whether emotional memory (EM) for objects with self-reference in Alzheimer's disease (AD) can be modeled with binomial logistic regression in a free recall and an object recognition test to predict EM enhancement. Twenty patients with AD and twenty healthy controls were studied. Six objects (three presented as gifts) were shown to each participant. Ten minutes later, a free recall and a recognition test were applied. The recognition test had target objects mixed with six similar distracter objects. Participants were asked to name any object in the recall test and to identify each object in the recognition test as known or unknown. The proportion of gift objects recalled by AD patients (41.6%) was larger than that of neutral objects (13.3%), and a significant EM recall effect for gifts was found (Wilcoxon: p < .003). EM was not found for recognition in AD patients due to a ceiling effect. Healthy older adults scored higher overall in recall and recognition but showed no EM enhancement due to a ceiling effect. A logistic regression showed that the likelihood of emotional recall memory can be modeled as a function of MMSE score (p < .014) and object status (p < .0001) as gift or non-gift. Recall memory was enhanced in AD patients for emotional objects, indicating that EM in mild to moderate AD, although impaired, can be provoked with a strong emotional load. The logistic regression model suggests that EM declines gradually with the progression of AD rather than being abruptly disrupted, and it may be a useful tool for evaluating the magnitude of emotional load.
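A hedged sketch of the kind of binomial logistic regression described above: recall (yes/no) modeled from MMSE score and object status (gift vs. non-gift). The data-generating coefficients below are fabricated purely to show the model form; they are not the study's estimates.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    mmse = rng.integers(10, 30, size=120)   # Mini-Mental State Examination scores
    is_gift = rng.integers(0, 2, size=120)  # 1 = gift (emotionally loaded) object
    logit = -6.0 + 0.2 * mmse + 1.5 * is_gift          # assumed generating process
    recalled = rng.random(120) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([mmse, is_gift])
    model = LogisticRegression().fit(X, recalled)
    print(model.coef_, model.intercept_)  # contributions of MMSE and gift status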
Impaired Perception of Emotional Expression in Amyotrophic Lateral Sclerosis.
Oh, Seong Il; Oh, Ki Wook; Kim, Hee Jin; Park, Jin Seok; Kim, Seung Hyun
2016-07-01
The increasing recognition that deficits in social emotions occur in amyotrophic lateral sclerosis (ALS) is helping to explain the spectrum of neuropsychological dysfunctions, thus supporting the view of ALS as a multisystem disorder involving neuropsychological deficits as well as motor deficits. The aim of this study was to characterize the emotion perception abilities of Korean patients with ALS based on the recognition of facial expressions. Twenty-four patients with ALS and 24 age- and sex-matched healthy controls completed neuropsychological tests and facial emotion recognition tasks [ChaeLee Korean Facial Expressions of Emotions (ChaeLee-E)]. The ChaeLee-E test includes facial expressions for seven emotions: happiness, sadness, anger, disgust, fear, surprise, and neutral. The ability to perceive facial emotions was significantly worse among ALS patients than among healthy controls [65.2±18.0% vs. 77.1±6.6% (mean±SD), p=0.009]. Eight of the 24 patients (33%) scored below the 5th percentile of controls for recognizing facial emotions. Emotion perception deficits occur in Korean ALS patients, particularly regarding facial expressions of emotion. These findings expand the spectrum of cognitive and behavioral dysfunction associated with ALS to include emotion processing dysfunction.
Cross-cultural decoding of positive and negative non-linguistic emotion vocalizations
Laukka, Petri; Elfenbein, Hillary Anger; Söder, Nela; Nordström, Henrik; Althoff, Jean; Chui, Wanda; Iraki, Frederick K.; Rockstuhl, Thomas; Thingujam, Nutankumar S.
2013-01-01
Which emotions are associated with universally recognized non-verbal signals? We address this issue by examining how reliably non-linguistic vocalizations (affect bursts) can convey emotions across cultures. Actors from India, Kenya, Singapore, and the USA were instructed to produce vocalizations that would convey nine positive and nine negative emotions to listeners. The vocalizations were judged by Swedish listeners using a within-valence forced-choice procedure, where positive and negative emotions were judged in separate experiments. Results showed that listeners could recognize a wide range of positive and negative emotions with accuracy above chance. For positive emotions, we observed the highest recognition rates for relief, followed by lust, interest, serenity and positive surprise, with affection and pride receiving the lowest recognition rates. Anger, disgust, fear, sadness, and negative surprise received the highest recognition rates for negative emotions, with the lowest rates observed for guilt and shame. In summary, the results showed that the voice can reveal both basic emotions and several positive emotions other than happiness across cultures, but self-conscious emotions such as guilt, pride, and shame seem not to be well recognized from non-linguistic vocalizations. PMID:23914178
Similar representations of emotions across faces and voices.
Kuhn, Lisa Katharina; Wydell, Taeko; Lavan, Nadine; McGettigan, Carolyn; Garrido, Lúcia
2017-09-01
[Correction Notice: An Erratum for this article was reported in Vol 17(6) of Emotion (see record 2017-18585-001). In the article, the copyright attribution was incorrectly listed and the Creative Commons CC-BY license disclaimer was incorrectly omitted from the author note. The correct copyright is "© 2017 The Author(s)" and the omitted disclaimer is below. All versions of this article have been corrected. "This article has been published under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Copyright for this article is retained by the author(s). Author(s) grant(s) the American Psychological Association the exclusive right to publish the article and identify itself as the original publisher."] Emotions are a vital component of social communication, carried across a range of modalities and via different perceptual signals such as specific muscle contractions in the face and in the upper respiratory system. Previous studies have found that emotion recognition impairments after brain damage depend on the modality of presentation: recognition from faces may be impaired whereas recognition from voices remains preserved, and vice versa. On the other hand, there is also evidence for shared neural activation during emotion processing in both modalities. In a behavioral study, we investigated whether there are shared representations in the recognition of emotions from faces and voices. We used a within-subjects design in which participants rated the intensity of facial expressions and nonverbal vocalizations for each of the 6 basic emotion labels. For each participant and each modality, we then computed a representation matrix with the intensity ratings of each emotion. These matrices allowed us to examine the patterns of confusions between emotions and to characterize the representations of emotions within each modality. We then compared the representations across modalities by computing the correlations of the representation matrices across faces and voices. We found highly correlated matrices across modalities, which suggest similar representations of emotions across faces and voices. We also showed that these results could not be explained by commonalities between low-level visual and acoustic properties of the stimuli. We thus propose that there are similar or shared coding mechanisms for emotions which may act independently of modality, despite their distinct perceptual inputs. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
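A minimal sketch of the comparison logic described above: build an emotion-by-emotion intensity-rating matrix per modality, then correlate the two matrices as an index of shared representation. The random ratings are placeholders; only the six-basic-emotion layout follows the abstract.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    # rows: expressed emotion; columns: mean rated intensity on each emotion label
    face_matrix = rng.random((6, 6))
    voice_matrix = face_matrix + rng.normal(scale=0.1, size=(6, 6))  # assume overlap

    r, p = pearsonr(face_matrix.ravel(), voice_matrix.ravel())
    print(f"face-voice representation similarity: r={r:.2f}, p={p:.3g}")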
The Social Regulation of Emotion: An Integrative, Cross-Disciplinary Model.
Reeck, Crystal; Ames, Daniel R; Ochsner, Kevin N
2016-01-01
Research in emotion regulation has largely focused on how people manage their own emotions, but there is a growing recognition that the ways in which we regulate the emotions of others also are important. Drawing on work from diverse disciplines, we propose an integrative model of the psychological and neural processes supporting the social regulation of emotion. This organizing framework, the 'social regulatory cycle', specifies at multiple levels of description the act of regulating another person's emotions as well as the experience of being a target of regulation. The cycle describes the processing stages that lead regulators to attempt to change the emotions of a target person, the impact of regulation on the processes that generate emotions in the target, and the underlying neural systems. Copyright © 2015. Published by Elsevier Ltd.
The Relationship between Emotion Recognition Ability and Social Skills in Young Children with Autism
ERIC Educational Resources Information Center
Williams, Beth T.; Gray, Kylie M.
2013-01-01
This study assessed the relationship between emotion recognition ability and social skills in 42 young children with autistic disorder aged 4-7 years. The analyses revealed that accuracy in recognition of sadness, but not happiness, anger or fear, was associated with higher ratings on the Vineland-II Socialization domain, above and beyond the…
Emotion Recognition in Children and Adolescents with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Kuusikko, Sanna; Haapsamo, Helena; Jansson-Verkasalo, Eira; Hurtig, Tuula; Mattila, Marja-Leena; Ebeling, Hanna; Jussila, Katja; Bolte, Sven; Moilanen, Irma
2009-01-01
We examined upper facial basic emotion recognition in 57 subjects with autism spectrum disorders (ASD) (M = 13.5 years) and 33 typically developing controls (M = 14.3 years) by using a standardized computer-aided measure (The Frankfurt Test and Training of Facial Affect Recognition, FEFA). The ASD group scored lower than controls on the total…
Luo, Xin; Fu, Qian-Jie; Galvin, John J.
2007-01-01
The present study investigated the ability of normal-hearing listeners and cochlear implant users to recognize vocal emotions. Sentences were produced by 1 male and 1 female talker according to 5 target emotions: angry, anxious, happy, sad, and neutral. Overall amplitude differences between the stimuli were either preserved or normalized. In experiment 1, vocal emotion recognition was measured in normal-hearing and cochlear implant listeners; cochlear implant subjects were tested using their clinically assigned processors. When overall amplitude cues were preserved, normal-hearing listeners achieved near-perfect performance, whereas cochlear implant listeners recognized less than half of the target emotions. Removing the overall amplitude cues significantly worsened mean normal-hearing and cochlear implant performance. In experiment 2, vocal emotion recognition was measured in cochlear implant listeners as a function of the number of channels (from 1 to 8) and envelope filter cutoff frequency (50 vs 400 Hz) in experimental speech processors. In experiment 3, vocal emotion recognition was measured in normal-hearing listeners as a function of the number of channels (from 1 to 16) and envelope filter cutoff frequency (50 vs 500 Hz) in acoustic cochlear implant simulations. Results from experiments 2 and 3 showed that both cochlear implant and normal-hearing performance significantly improved as the number of channels or the envelope filter cutoff frequency was increased. The results suggest that spectral, temporal, and overall amplitude cues each contribute to vocal emotion recognition. The poorer cochlear implant performance is most likely attributable to the lack of salient pitch cues and the limited functional spectral resolution. PMID:18003871
Cerami, Chiara; Dodich, Alessandra; Iannaccone, Sandro; Marcone, Alessandra; Lettieri, Giada; Crespi, Chiara; Gianolli, Luigi; Cappa, Stefano F.; Perani, Daniela
2015-01-01
The behavioural variant of frontotemporal dementia (bvFTD) is a rare disease mainly affecting the social brain. FDG-PET fronto-temporal hypometabolism is a supportive feature for the diagnosis. It may also provide specific functional metabolic signatures for altered socio-emotional processing. In this study, we evaluated the emotion recognition and attribution deficits and FDG-PET cerebral metabolic patterns at the group and individual levels in a sample of sporadic bvFTD patients, exploring the cognitive-functional correlations. Seventeen probable mild bvFTD patients (10 male and 7 female; age 67.8±9.9) were administered standardized and validated versions of social cognition tasks assessing the recognition of basic emotions and the attribution of emotions and intentions (i.e., Ekman 60-Faces test-Ek60F and Story-based Empathy task-SET). FDG-PET was analysed using an optimized voxel-based SPM method at the single-subject and group levels. Severe deficits of emotion recognition and processing characterized the bvFTD condition. At the group level, metabolic dysfunction in the right amygdala, temporal pole, and middle cingulate cortex was highly correlated to the emotional recognition and attribution performances. At the single-subject level, however, heterogeneous impairments of social cognition tasks emerged, and different metabolic patterns, involving limbic structures and prefrontal cortices, were also observed. The derangement of a right limbic network is associated with altered socio-emotional processing in bvFTD patients, but different hypometabolic FDG-PET patterns and heterogeneous performances on social tasks at an individual level exist. PMID:26513651
The coupling of emotion and cognition in the eye: introducing the pupil old/new effect.
Võ, Melissa L-H; Jacobs, Arthur M; Kuchinke, Lars; Hofmann, Markus; Conrad, Markus; Schacht, Annekathrin; Hutzler, Florian
2008-01-01
The study presented here investigated the effects of emotional valence on the memory for words by assessing both memory performance and pupillary responses during a recognition memory task. Participants had to make speeded judgments on whether a word presented in the test phase of the experiment had already been presented ("old") or not ("new"). An emotion-induced recognition bias was observed: Words with emotional content not only produced a higher amount of hits, but also elicited more false alarms than neutral words. Further, we found a distinct pupil old/new effect characterized as an elevated pupillary response to hits as opposed to correct rejections. Interestingly, this pupil old/new effect was clearly diminished for emotional words. We therefore argue that the pupil old/new effect is not only able to mirror memory retrieval processes, but also reflects modulation by an emotion-induced recognition bias.
Interplay between affect and arousal in recognition memory.
Greene, Ciara M; Bahri, Pooja; Soto, David
2010-07-23
Emotional states linked to arousal and mood are known to affect the efficiency of cognitive performance. However, the extent to which memory processes may be affected by arousal, mood or their interaction is poorly understood. Following a study phase of abstract shapes, we altered the emotional state of participants by means of exposure to music that varied in both mood and arousal dimensions, leading to four different emotional states: (i) positive mood-high arousal; (ii) positive mood-low arousal; (iii) negative mood-high arousal; (iv) negative mood-low arousal. Following the emotional induction, participants performed a memory recognition test. Critically, there was an interaction between mood and arousal on recognition performance. Memory was enhanced in the positive mood-high arousal and in the negative mood-low arousal states, relative to the other emotional conditions. Neither mood nor arousal alone but their interaction appears most critical to understanding the emotional enhancement of memory.
The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.
Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard
2016-10-01
The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition. Copyright © 2016 Elsevier Inc. All rights reserved.
Development of emotional facial recognition in late childhood and adolescence.
Thomas, Laura A; De Bellis, Michael D; Graham, Reiko; LaBar, Kevin S
2007-09-01
The ability to interpret emotions in facial expressions is crucial for social functioning across the lifespan. Facial expression recognition develops rapidly during infancy and improves with age during the preschool years. However, the developmental trajectory from late childhood to adulthood is less clear. We tested older children, adolescents and adults on a two-alternative forced-choice discrimination task using morphed faces that varied in emotional content. Actors appeared to pose expressions that changed incrementally along three progressions: neutral-to-fear, neutral-to-anger, and fear-to-anger. Across all three morph types, adults displayed more sensitivity to subtle changes in emotional expression than children and adolescents. Fear morphs and fear-to-anger blends showed a linear developmental trajectory, whereas anger morphs showed a quadratic trend, increasing sharply from adolescents to adults. The results provide evidence for late developmental changes in emotional expression recognition with some specificity in the time course for distinct emotions.
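As an aside on the stimulus construction mentioned above, incremental expression morphs are often approximated by weighted blends between two aligned images; a bare-bones cross-dissolve is sketched below. Real morphing software also warps facial geometry, so this is an intentionally simplified illustration with placeholder arrays.

    import numpy as np

    def cross_dissolve(img_a, img_b, alpha):
        """Blend two aligned face images; alpha=0 gives img_a, alpha=1 gives img_b."""
        return (1.0 - alpha) * img_a + alpha * img_b

    neutral = np.zeros((64, 64))  # placeholder aligned grayscale images
    fear = np.ones((64, 64))
    steps = [cross_dissolve(neutral, fear, a) for a in np.linspace(0, 1, 11)]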
Test battery for measuring the perception and recognition of facial expressions of emotion
Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner
2014-01-01
Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528
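For readers who want the internal-consistency estimates referenced above made concrete, here is a generic Cronbach's alpha function; it is the standard formula, not code from the test battery, and the data are random placeholders shaped like the sample described (269 participants, 16 tasks).

    import numpy as np

    def cronbach_alpha(items):
        """items: (n_subjects, n_items) score matrix -> internal consistency."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var / total_var)

    scores = np.random.default_rng(0).random((269, 16))  # placeholder task scores
    print(cronbach_alpha(scores))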
Facial Emotion Recognition in Children with High Functioning Autism and Children with Social Phobia
ERIC Educational Resources Information Center
Wong, Nina; Beidel, Deborah C.; Sarver, Dustin E.; Sims, Valerie
2012-01-01
Recognizing facial affect is essential for effective social functioning. This study examines emotion recognition abilities in children aged 7-13 years with High Functioning Autism (HFA = 19), Social Phobia (SP = 17), or typical development (TD = 21). Findings indicate that all children identified certain emotions more quickly (e.g., happy [less…
ERIC Educational Resources Information Center
Bal, Elgiz; Harden, Emily; Lamb, Damon; Van Hecke, Amy Vaughan; Denver, John W.; Porges, Stephen W.
2010-01-01
Respiratory Sinus Arrhythmia (RSA), heart rate, and accuracy and latency of emotion recognition were evaluated in children with autism spectrum disorders (ASD) and typically developing children while viewing videos of faces slowly transitioning from a neutral expression to one of six basic emotions (e.g., anger, disgust, fear, happiness, sadness,…
Wang, Kun-Ching
2015-01-14
The classification of emotional speech is widely considered in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the multi-resolution spectrogram of emotional speech should provide a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can give a clearer discrimination between emotions than uniform-resolution analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm is applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification performance for real-life emotion recognition in speech.
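A hedged sketch in the spirit of the MRTII idea above: compute a spectrogram, quantize it as an image, and extract gray-level co-occurrence (GLCM) texture statistics at several resolutions. The GLCM properties, scales, and placeholder audio are assumptions for illustration, not the paper's exact formulation.

    import numpy as np
    from scipy.signal import spectrogram
    from skimage.feature import graycomatrix, graycoprops
    from skimage.transform import resize

    rng = np.random.default_rng(0)
    audio = rng.normal(size=16000)  # 1 s of placeholder speech at 16 kHz
    _, _, S = spectrogram(audio, fs=16000, nperseg=256)
    img = np.log1p(S)  # log-magnitude spectrogram as a texture image

    features = []
    for scale in (1.0, 0.5, 0.25):  # multi-resolution analysis
        small = resize(img, (max(2, int(img.shape[0] * scale)),
                             max(2, int(img.shape[1] * scale))))
        q = np.uint8(255 * (small - small.min()) / (np.ptp(small) + 1e-12))
        glcm = graycomatrix(q, distances=[1], angles=[0], levels=256, normed=True)
        features += [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity")]
    print(features)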
Oxytocin improves facial emotion recognition in young adults with antisocial personality disorder.
Timmermann, Marion; Jeung, Haang; Schmitt, Ruth; Boll, Sabrina; Freitag, Christine M; Bertsch, Katja; Herpertz, Sabine C
2017-11-01
Deficient facial emotion recognition has been suggested to underlie aggression in individuals with antisocial personality disorder (ASPD). As the neuropeptide oxytocin (OT) has been shown to improve facial emotion recognition, it might also exert beneficial effects in individuals whose aggression causes serious harm to society. In a double-blind, randomized, placebo-controlled crossover trial, 22 individuals with ASPD and 29 healthy control (HC) subjects (matched for age, sex, intelligence, and education) were intranasally administered either OT (24 IU) or a placebo 45 min before participating in an emotion classification paradigm with fearful, angry, and happy faces. We assessed the number of correct classifications and reaction times as indicators of emotion recognition ability. Significant group×substance×emotion interactions were found in correct classifications and reaction times. Compared to HC, individuals with ASPD showed deficits in recognizing fearful and happy faces; these group differences were no longer observable under OT. Additionally, reaction times for angry faces differed significantly between the ASPD and HC groups in the placebo condition. This effect was mainly driven by longer reaction times in HC subjects after placebo administration compared to OT administration, while individuals with ASPD descriptively showed the opposite response pattern. Our data indicate an improvement in the recognition of fearful and happy facial expressions by OT in young adults with ASPD. Particularly the increased recognition of facial fear is of high importance, since the correct perception of distress signals in others is thought to inhibit aggression. Beneficial effects of OT might be further mediated by improved recognition of facial happiness, probably reflecting increased social reward responsiveness. Copyright © 2017. Published by Elsevier Ltd.
Buunk, Anne M; Groen, Rob J M; Veenstra, Wencke S; Metzemaekers, Jan D M; van der Hoeven, Johannes H; van Dijk, J Marc C; Spikman, Jacoba M
2016-11-01
The authors' aim was to investigate cognitive outcome in patients with aneurysmal and angiographically negative subarachnoid hemorrhage (aSAH and anSAH) by comparing them to healthy controls and to each other. Besides investigating cognitive functions such as memory and attention, they focused on higher-order prefrontal functions, namely executive functioning (EF) and emotion recognition. Patients and healthy controls were assessed with tests measuring memory (15 Words Test, Digit Span), attention and processing speed (Trail Making Test A and B), EF (Zoo Map, Letter Fluency, Dysexecutive Questionnaire), and emotion recognition (Facial Expressions of Emotion Stimuli and Tests). Between-group comparisons of test performances were made. Patients with aSAH scored significantly lower than healthy controls on measures of memory, processing speed, and attention, but anSAH patients did not. On the higher-order prefrontal functions (EF and emotion recognition), aSAH patients were clearly impaired compared to healthy controls. However, anSAH patients did not perform significantly better than aSAH patients on the majority of the tests. In the subacute phase after SAH, cognitive functions, including the higher-order prefrontal functions EF and emotion recognition, were clearly impaired in aSAH patients. Patients with anSAH did not perform better than aSAH patients, which indicates that these functions may also be affected to some extent in anSAH patients. Considering the importance of these higher-order prefrontal functions for daily life functioning, and following the results of the present study, tests that measure emotion recognition and EF should be part of the standard neuropsychological assessment after SAH. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
EUReKA! A Conceptual Model of Emotion Understanding
Castro, Vanessa L.; Cheng, Yanhua; Halberstadt, Amy G.; Grühn, Daniel
2015-01-01
The field of emotion understanding is replete with measures, yet lacks an integrated conceptual organizing structure. To identify and organize skills associated with the recognition and knowledge of emotions, and to highlight the focus of emotion understanding as localized in the self, in specific others, and in generalized others, we introduce the conceptual framework of Emotion Understanding in Recognition and Knowledge Abilities (EUReKA). We then categorize fifty-six existing methods of emotion understanding within this framework to highlight current gaps and future opportunities in assessing emotion understanding across the lifespan. We hope the EUReKA model provides a systematic and integrated framework for conceptualizing and measuring emotion understanding for future research. PMID:27594904
Lima, César F; Garrett, Carolina; Castro, São Luís
2013-01-01
Does emotion processing in music and speech prosody recruit common neurocognitive mechanisms? To examine this question, we implemented a cross-domain comparative design in Parkinson's disease (PD). Twenty-four patients and 25 controls performed emotion recognition tasks for music and spoken sentences. In music, patients had impaired recognition of happiness and peacefulness, and intact recognition of sadness and fear; this pattern was independent of general cognitive and perceptual abilities. In speech, patients had a small global impairment, which was significantly mediated by executive dysfunction. Hence, PD affected differently musical and prosodic emotions. This dissociation indicates that the mechanisms underlying the two domains are partly independent.
Buhlmann, Ulrike; Winter, Anna; Kathmann, Norbert
2013-03-01
Body dysmorphic disorder (BDD) is characterized by perceived appearance-related defects, often tied to aspects of the face or head (e.g., acne). Deficits in decoding emotional expressions have been examined in several psychological disorders including BDD. Previous research indicates that BDD is associated with impaired facial emotion recognition, particularly in situations that involve the BDD sufferer him/herself. The purpose of this study was to further evaluate the ability to read other people's emotions among 31 individuals with BDD, and 31 mentally healthy controls. We applied the Reading the Mind in the Eyes task, in which participants are presented with a series of pairs of eyes, one at a time, and are asked to identify the emotion that describes the stimulus best. The groups did not differ with respect to decoding other people's emotions by looking into their eyes. Findings are discussed in light of previous research examining emotion recognition in BDD. Copyright © 2013. Published by Elsevier Ltd.
Ardizzi, Martina; Evangelista, Valentina; Ferroni, Francesca; Umiltà, Maria A.; Ravera, Roberto; Gallese, Vittorio
2017-01-01
One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, normally allowing the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus unbalancing victims’ recognition of negative emotions. Despite the plethora of studies on this topic, to date, it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30, 15 males) performed a forced-choice chimeric facial expressions recognition task. The chimeric facial expressions were obtained pairing upper and lower half faces of two different negative emotions (selected from anger, fear and sadness for a total of six different combinations). Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger. In this case, differently from controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims’ performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify the perceptual analysis of phylogenetically old behavioral patterns like the facial expressions of emotions. PMID:28690565
Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.
Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S
2007-01-01
People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations: (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated the assessed muscles, whereas the chew manipulation activated muscles only intermittently. Further, expressing happiness generated the most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.
Neuroticism and facial emotion recognition in healthy adults.
Andric, Sanja; Maric, Nadja P; Knezevic, Goran; Mihaljevic, Marina; Mirjanic, Tijana; Velthorst, Eva; van Os, Jim
2016-04-01
The aim of the present study was to examine whether healthy individuals with higher levels of neuroticism, a robust independent predictor of psychopathology, exhibit altered facial emotion recognition performance. Facial emotion recognition accuracy was investigated in 104 healthy adults using the Degraded Facial Affect Recognition Task (DFAR). Participants' degree of neuroticism was estimated using neuroticism scales extracted from the Eysenck Personality Questionnaire and the Revised NEO Personality Inventory. A significant negative correlation between the degree of neuroticism and the percentage of correct answers on the DFAR was found only for happy facial expressions (significant after applying Bonferroni correction). Altered sensitivity to emotional context represents a useful and easily obtained cognitive phenotype that correlates strongly with inter-individual variations in neuroticism linked to stress vulnerability and subsequent psychopathology. The present findings could have implications for early intervention strategies and staging models in psychiatry. © 2015 Wiley Publishing Asia Pty Ltd.
Gender affects body language reading.
Sokolov, Arseny A; Krüger, Samuel; Enck, Paul; Krägeloh-Mann, Ingeborg; Pavlova, Marina A
2011-01-01
Body motion is a rich source of information for social cognition. However, gender effects in body language reading are largely unknown. Here we investigated whether, and, if so, how recognition of emotional expressions revealed by body motion is gender dependent. To this end, females and males were presented with point-light displays portraying knocking at a door performed with different emotional expressions. The findings show that gender affects accuracy rather than speed of body language reading. This effect, however, is modulated by the emotional content of actions: males excel in recognition accuracy for happy actions, whereas females tend to excel in recognition of hostile angry knocking. Women's advantage in recognition accuracy for neutral actions suggests that females are better tuned to the lack of emotional content in body actions. The study provides novel insights into the understanding of gender effects in body language reading, and helps to shed light on gender vulnerability to neuropsychiatric and neurodevelopmental impairments in visual social cognition.
The voice conveys emotion in ten globalized cultures and one remote village in Bhutan.
Cordaro, Daniel T; Keltner, Dacher; Tshering, Sumjay; Wangchuk, Dorji; Flynn, Lisa M
2016-02-01
With data from 10 different globalized cultures and 1 remote, isolated village in Bhutan, we examined universals and cultural variations in the recognition of 16 nonverbal emotional vocalizations. College students in 10 nations (Study 1) and villagers in remote Bhutan (Study 2) were asked to match emotional vocalizations to 1-sentence stories of the same valence. Guided by previous conceptualizations of recognition accuracy, across both studies, 7 of the 16 vocal burst stimuli were found to have strong or very strong recognition in all 11 cultures, 6 vocal bursts were found to have moderate recognition, and 4 were not universally recognized. All vocal burst stimuli varied significantly in terms of the degree to which they were recognized across the 11 cultures. Our discussion focuses on the implications of these results for current debates concerning the emotion conveyed in the voice. (c) 2016 APA, all rights reserved.
Lee, Woo Kyeong; Kim, Yong Kyu
2013-09-01
Several studies have suggested the presence of a theory of mind (ToM) deficit in schizophrenic disorders. This study examined the relationship of emotion recognition, theory of mind, and ward behavior in patients with schizophrenia. Fifty-five patients with chronic schizophrenia completed measures of emotion recognition, ToM, intelligence, Positive and Negative Syndrome Scale (PANSS) and Nurse's Observation Scale for Inpatient Evaluation (NOSIE). Theory of mind sum score correlated significantly with IQ, emotion recognition, and ward behavior. Ward behavior was linked to the duration of the illness, and even more so to theory of mind deficits. Theory of mind contributed a significant proportion of the amount of variance to explain social behavior on the ward. Considering our study results, impaired theory of mind contributes significantly to the understanding of social competence in patients with schizophrenia. Copyright © 2012 Wiley Publishing Asia Pty Ltd.
CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset
Cao, Houwei; Cooper, David G.; Keutmann, Michael K.; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini
2014-01-01
People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-valued intensity values for the perceived emotion were collected using crowd-sourcing from 2,443 raters. The human recognition of intended emotion for the audio-only, visual-only, and audio-visual data is 40.9%, 58.2%, and 63.6%, respectively. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. The accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be well recognized based on evidence from a single modality. The large dataset we introduce can be used to probe other questions concerning the audio-visual perception of emotion. PMID:25653738
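As a hedged aside for readers who work with such corpora: per-modality recognition rates like those reported above could be computed from a tidy table of crowd ratings along the lines of the Python sketch below. The file and column names (modality, intended, perceived) are assumptions for illustration, not CREMA-D's actual release format.

    import pandas as pd

    # Hypothetical tidy export: one row per rating, with the presentation
    # modality, the actor's intended emotion, and the rater's perceived emotion.
    ratings = pd.read_csv("crema_d_ratings.csv")

    # A rating is correct when the perceived label matches the intended one;
    # averaging the indicator per modality yields the recognition rate.
    rates = (ratings.assign(correct=ratings["perceived"] == ratings["intended"])
                    .groupby("modality")["correct"].mean())
    print(rates)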
Familial covariation of facial emotion recognition and IQ in schizophrenia.
Andric, Sanja; Maric, Nadja P; Mihaljevic, Marina; Mirjanic, Tijana; van Os, Jim
2016-12-30
Alterations in general intellectual ability and social cognition in schizophrenia are core features of the disorder, evident at the illness' onset and persistent throughout its course. However, previous studies examining cognitive alterations in siblings discordant for schizophrenia yielded inconsistent results. The present study aimed to investigate the nature of the association between facial emotion recognition and general IQ by applying a genetically sensitive cross-trait cross-sibling design. Participants (total n=158; patients, unaffected siblings, controls) were assessed using the Benton Facial Recognition Test, the Degraded Facial Affect Recognition Task (DFAR) and the Wechsler Adult Intelligence Scale-III. Patients had lower IQ and altered facial emotion recognition in comparison to the other groups. Healthy siblings and controls did not significantly differ in IQ and DFAR performance, but siblings exhibited intermediate angry facial expression recognition. Cross-trait within-subject analyses showed significant associations between overall DFAR performance and IQ in all participants. Within-trait cross-sibling analyses found significant associations between patients' and siblings' IQ and overall DFAR performance, suggesting their familial clustering. Finally, cross-trait cross-sibling analyses revealed familial covariation of facial emotion recognition and IQ in siblings discordant for schizophrenia, further indicating their familial etiology. Both traits are important phenotypes for genetic studies and potential early clinical markers of schizophrenia-spectrum disorders. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Silver, Henry; Bilker, Warren B
2015-01-01
Social cognition is commonly assessed by identification of emotions in facial expressions. The presence of colour, a salient feature of stimuli, might influence emotional face perception. We administered 2 tests of facial emotion recognition, the Emotion Recognition Test (ER40) using colour pictures and the Penn Emotional Acuity Test using monochromatic pictures, to 37 young healthy, 39 old healthy and 37 schizophrenic men. Among young healthy individuals, recognition of emotions was more accurate and faster for colour than for monochromatic pictures. Compared to the younger group, older healthy individuals revealed impairment in identification of sad expressions in colour but not monochromatic pictures. Schizophrenia patients showed greater impairment in colour than monochromatic pictures for neutral and sad expressions and in overall total score compared to both healthy groups. Patients showed significant correlations between cognitive impairment and perception of emotion in colour but not monochromatic pictures. Colour enhances perception of general emotional cues, and this contextual effect is impaired in healthy ageing and schizophrenia. The effects of colour need to be considered in interpreting and comparing studies of emotion perception. Coloured face stimuli may be more sensitive to emotion processing impairments but less selective for emotion-specific information than monochromatic stimuli. This may impact on their utility in early detection of impairments and investigations of underlying mechanisms.
Rieffe, Carolien; Wiefferink, Carin H
2017-03-01
The capacity for emotion recognition and understanding is crucial for daily social functioning. We examined to what extent this capacity is impaired in young children with a Language Impairment (LI). In typical development, children learn to recognize emotions in faces and situations through social experiences and social learning. Children with LI have less access to these experiences and are therefore expected to fall behind their peers without LI. In this study, 89 preschool children with LI and 202 children without LI (mean age 3 years and 10 months in both groups) were tested on three indices for facial emotion recognition (discrimination, identification, and attribution in emotion evoking situations). Parents reported on their children's emotion vocabulary and ability to talk about their own emotions. Preschoolers with and without LI performed similarly on the non-verbal task for emotion discrimination. Children with LI fell behind their peers without LI on both other tasks for emotion recognition that involved labelling the four basic emotions (happy, sad, angry, fear). The outcomes of these two tasks were also related to children's level of emotion language. These outcomes emphasize the importance of 'emotion talk' at the youngest age possible for children with LI. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gaudelus, B; Virgile, J; Peyroux, E; Leleu, A; Baudouin, J-Y; Franck, N
2015-06-01
The impairment of social cognition, including facial affect recognition, is a well-established trait in schizophrenia, and specific cognitive remediation programs focusing on facial affect recognition have been developed by different teams worldwide. However, even though social cognitive impairments have been confirmed, previous studies have also shown heterogeneity of results across subjects. Therefore, individual abilities should be assessed before proposing such programs. Most research teams apply tasks based on the facial affect stimuli of Ekman et al. or Gur et al. However, these tasks are not easily applicable in routine clinical practice. Here, we present the Facial Emotions Recognition Test (TREF), which is designed to identify facial affect recognition impairments in clinical practice. The test is composed of 54 photos and evaluates abilities in the recognition of six universal emotions (joy, anger, sadness, fear, disgust and contempt). Each of these emotions is represented with colored photos of 4 different models (two men and two women) at nine intensity levels from 20 to 100%. Each photo is presented for 10 seconds; no time limit for responding is applied. The present study compared the scores of the TREF test in a sample of healthy controls (64 subjects) and people with stabilized schizophrenia (45 subjects) according to the DSM IV-TR criteria. We analysed global scores for all emotions, as well as sub-scores for each emotion, between these two groups, taking into account gender differences. Our results were consistent with previous findings. Applying the TREF, we confirmed an impairment in facial affect recognition in schizophrenia by showing significant differences between the two groups in their global results (76.45% for healthy controls versus 61.28% for people with schizophrenia), as well as in sub-scores for each emotion except joy. Scores for women were significantly higher than for men in the population without psychiatric diagnosis. The study also allowed the identification of cut-off scores: results below 2 standard deviations of the healthy control average (61.57%) pointed to a facial affect recognition deficit. The TREF appears to be a useful tool to identify facial affect recognition impairment in schizophrenia. Neuropsychologists who have tried this task have given positive feedback. The TREF is easy to use (duration of about 15 minutes), easy to apply in subjects with attentional difficulties, and tests facial affect recognition at ecological intensity levels. These results have to be confirmed in the future with larger sample sizes, and in comparison with other tasks evaluating facial affect recognition processes. Copyright © 2014 L’Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
Nardelli, M; Greco, A; Valenza, G; Lanata, A; Bailon, R; Scilingo, E P
2017-07-01
This paper reports on a novel method for the analysis of Heart Rate Variability (HRV) through Lagged Poincaré Plot (LPP) theory. Specifically, a hybrid method, LPPsymb, combining LPP quantifiers with related symbolic dynamics, was proposed. LPP has been applied to investigate the autonomic response to pleasant and unpleasant pictures extracted from the International Affective Picture System (IAPS). IAPS pictures are standardized in terms of level of arousal, i.e. the intensity of the evoked emotion, and valence, i.e. the level of pleasantness/unpleasantness, according to the Circumplex Model of Affects (CMA). Twenty-two healthy subjects were enrolled in the experiment, which comprised four sessions with increasing arousal level. Within each session, valence varied from positive to negative. An ad-hoc pattern recognition algorithm using a Leave-One-Subject-Out (LOSO) procedure based on a Quadratic Discriminant Classifier (QDC) was implemented. Our pattern recognition system was able to classify pleasant and unpleasant sessions with an accuracy of 71.59%. Therefore, we can suggest the use of LPPsymb for emotion recognition.
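For reference, the classical lagged Poincaré quantifiers on which such analyses build are straightforward to compute. The sketch below is a minimal illustration of the standard SD1/SD2 descriptors at a chosen lag; it does not reproduce the paper's hybrid LPPsymb method, whose symbolic-dynamics component is not specified here.

    import numpy as np

    def lagged_poincare_sd(rr, lag=1):
        """Return (SD1, SD2) for the Poincare plot of RR[n] vs RR[n + lag]."""
        x, y = rr[:-lag], rr[lag:]
        # SD1: dispersion perpendicular to the identity line (short-term variability)
        sd1 = np.std((y - x) / np.sqrt(2), ddof=1)
        # SD2: dispersion along the identity line (long-term variability)
        sd2 = np.std((y + x) / np.sqrt(2), ddof=1)
        return float(sd1), float(sd2)

    # Synthetic RR intervals (seconds), only to make the sketch runnable.
    rng = np.random.default_rng(0)
    rr = 0.8 + 0.05 * rng.standard_normal(300)
    for m in (1, 5, 10):
        print(m, lagged_poincare_sd(rr, lag=m))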
Niedtfeld, Inga; Defiebre, Nadine; Regenbogen, Christina; Mier, Daniela; Fenske, Sabrina; Kirsch, Peter; Lis, Stefanie; Schmahl, Christian
2017-04-01
Previous research has revealed alterations and deficits in facial emotion recognition in patients with borderline personality disorder (BPD). During interpersonal communication in daily life, social signals such as speech content, variation in prosody, and facial expression need to be considered simultaneously. We hypothesized that deficits in higher level integration of social stimuli contribute to difficulties in emotion recognition in BPD, and heightened arousal might explain this effect. Thirty-one patients with BPD and thirty-one healthy controls were asked to identify emotions in short video clips, which were designed to represent different combinations of the three communication channels: facial expression, speech content, and prosody. Skin conductance was recorded as a measure of sympathetic arousal, while controlling for state dissociation. Patients with BPD showed lower mean accuracy scores than healthy control subjects in all conditions comprising emotional facial expressions. This was true for the condition with facial expression only, and for the combination of all three communication channels. Electrodermal responses were enhanced in BPD only in response to auditory stimuli. In line with the major body of facial emotion recognition studies, we conclude that deficits in the interpretation of facial expressions lead to the difficulties observed in multimodal emotion processing in BPD.
Sensory contribution to vocal emotion deficit in Parkinson's disease after subthalamic stimulation.
Péron, Julie; Cekic, Sezen; Haegelen, Claire; Sauleau, Paul; Patel, Sona; Drapier, Dominique; Vérin, Marc; Grandjean, Didier
2015-02-01
Subthalamic nucleus (STN) deep brain stimulation in Parkinson's disease induces modifications in the recognition of emotion from voices (or emotional prosody). Nevertheless, the underlying mechanisms are still only poorly understood, and the role of acoustic features in these deficits has yet to be elucidated. Our aim was to identify the influence of acoustic features on changes in emotional prosody recognition following STN stimulation in Parkinson's disease. To this end, we analysed the performances of patients on vocal emotion recognition in pre- versus post-operative groups, as well as of matched controls, entering the acoustic features of the stimuli into our statistical models. Analyses revealed that the post-operative biased ratings on the Fear scale when patients listened to happy stimuli were correlated with loudness, while the biased ratings on the Sadness scale when they listened to happiness were correlated with fundamental frequency (F0). Furthermore, disturbed ratings on the Happiness scale when the post-operative patients listened to sadness were found to be correlated with F0. These results suggest that inadequate use of acoustic features following subthalamic stimulation has a significant impact on emotional prosody recognition in patients with Parkinson's disease, affecting the extraction and integration of acoustic cues during emotion perception. Copyright © 2014 Elsevier Ltd. All rights reserved.
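As an illustration only, the core of such an analysis, relating a per-stimulus acoustic feature to a rating bias, reduces to a correlation. The sketch below uses synthetic values; the study itself entered acoustic features into fuller statistical models rather than bare correlations.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    mean_f0 = rng.uniform(120, 300, size=40)                 # mean F0 per stimulus (Hz)
    sadness_bias = 0.01 * mean_f0 + rng.normal(0, 0.5, 40)   # synthetic rating bias

    # Correlate the acoustic feature with the rating bias across stimuli.
    r, p = pearsonr(mean_f0, sadness_bias)
    print(f"r = {r:.2f}, p = {p:.3f}")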
Maki, Yohko; Yoshida, Hiroshi; Yamaguchi, Tomoharu; Yamaguchi, Haruyasu
2013-01-01
Positivity recognition bias has been reported for facial expressions as well as memory and visual stimuli in aged individuals, whereas emotional facial recognition in Alzheimer disease (AD) patients is controversial, with possible involvement of confounding factors such as deficits in spatial processing of non-emotional facial features and in verbal processing to express emotions. Thus, we examined whether recognition of positive facial expressions was preserved in AD patients, by adopting a new method that eliminated the influences of these confounding factors. Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate the factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate the factors related to verbal processing, the participants were required to match the stimulus and answer images, avoiding the use of verbal labels. In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than to the other five expressions. In AD patients, recognition of happiness was relatively preserved: it was the most sensitive of the six expressions and was robust to the influences of age and disease.
von Piekartz, H; Lüers, J; Daumeyer, H; Mohr, G
2017-10-01
The aim of this study was to investigate the effects of kinesiophobia on emotion recognition and left/right judgement. A total of 67 patients with chronic musculoskeletal pain were tested. In all, 24 patients achieved a score >37 on the Tampa Scale of Kinesiophobia and were included in the study. The ability to recognize basic emotions coded through facial expression was assessed using the Facially Expressed Emotion Labeling (FEEL) test. Left/right judgement was evaluated using a special Face-mirroring Assessment and Treatment program. The Toronto Alexithymia Scale-26 (TAS-26) was used to assess whether the patients showed signs of alexithymia. The FEEL score of patients with kinesiophobia was significantly lower (p = 0.019). Recognition of the basic emotions fear (p = 0.026), anger (p = 0.027), and surprise (p = 0.014) differed significantly from that of unaffected subjects; the basic emotion surprise was recognized more often by patients with kinesiophobia (p = 0.014). Only Scale 1 of the TAS-26 (problems identifying emotions) showed a significant difference between patients with kinesiophobia (p = 0.008) and healthy subjects. The results show that kinesiophobic patients have altered recognition of emotions, problems in left/right judgement, and signs of alexithymia.
Zhu, Lianzhang; Chen, Leiming; Zhao, Dehai
2017-01-01
Accurate emotion recognition from speech is important for applications like smart health care, smart entertainment, and other smart services. High-accuracy emotion recognition from Chinese speech is challenging due to the complexities of the Chinese language. In this paper, we explore how to improve the accuracy of speech emotion recognition, including speech signal feature extraction and emotion classification methods. Five types of features are extracted from a speech sample: mel-frequency cepstral coefficients (MFCC), pitch, formants, short-term zero-crossing rate, and short-term energy. By comparing statistical features with deep features extracted by a Deep Belief Network (DBN), we attempt to find the best features to identify the emotion status of speech. We propose a novel classification method that combines DBN and SVM (support vector machine) instead of using only one of them. In addition, a conjugate gradient method is applied to train the DBN in order to speed up the training process. Gender-dependent experiments are conducted using an emotional speech database created by the Chinese Academy of Sciences. The results show that DBN features can reflect emotion status better than artificial features, and our new classification approach achieves an accuracy of 95.8%, which is higher than using either DBN or SVM separately. Results also show that DBN can work very well for small training databases if it is properly designed. PMID:28737705
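To make the feature set concrete: the frame-level features the abstract names can be extracted with off-the-shelf tools roughly as sketched below (librosa assumed available). This is an illustrative assumption about one way to obtain such features, not the authors' pipeline; their formant extractor and DBN+SVM training code are not reproduced here.

    import numpy as np
    import librosa

    # Hypothetical input file; 16 kHz is a common rate for speech emotion work.
    y, sr = librosa.load("sample.wav", sr=16000)

    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, n_frames)
    zcr = librosa.feature.zero_crossing_rate(y)          # short-term zero-crossing rate
    energy = librosa.feature.rms(y=y)                    # short-term energy (RMS)
    f0, voiced_flag, voiced_prob = librosa.pyin(         # pitch track (NaN when unvoiced)
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)

    # One common way to pool frame-level features into an utterance-level
    # vector before a classifier such as an SVM: per-feature statistics.
    feats = np.hstack([mfcc.mean(axis=1), mfcc.std(axis=1),
                       [zcr.mean(), energy.mean(), np.nanmean(f0)]])
    print(feats.shape)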
Sharpe, Emma; Wallis, Deborah J; Ridout, Nathan
2016-06-30
This study aimed to: (i) determine if the attention bias towards angry faces reported in eating disorders generalises to a non-clinical sample varying in eating disorder-related symptoms; (ii) examine if the bias occurs during initial orientation or later strategic processing; and (iii) confirm previous findings of impaired facial emotion recognition in non-clinical disordered eating. Fifty-two females viewed a series of face-pairs (happy or angry paired with neutral) whilst their attentional deployment was continuously monitored using an eye-tracker. They subsequently identified the emotion portrayed in a separate series of faces. The highest (n=18) and lowest scorers (n=17) on the Eating Disorders Inventory (EDI) were compared on the attention and facial emotion recognition tasks. Those with relatively high scores exhibited impaired facial emotion recognition, confirming previous findings in similar non-clinical samples. They also displayed biased attention away from emotional faces during later strategic processing, which is consistent with previously observed impairments in clinical samples. These differences were related to drive-for-thinness. Although we found no evidence of a bias towards angry faces, it is plausible that the observed impairments in emotion recognition and avoidance of emotional faces could disrupt social functioning and act as a risk factor for the development of eating disorders. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Riemer, Valentin; Frommel, Julian; Layher, Georg; Neumann, Heiko; Schrader, Claudia
2017-01-01
The importance of emotions experienced by learners during their interaction with multimedia learning systems, such as serious games, underscores the need to identify sources of information that allow the recognition of learners’ emotional experience without interrupting the learning process. Bodily expression is gaining in attention as one of these sources of information. However, to date, the question of how bodily expression can convey different emotions has largely been addressed in research relying on acted emotion displays. Following a more contextualized approach, the present study aims to identify features of bodily expression (i.e., posture and activity of the upper body and the head) that relate to genuine emotional experience during interaction with a serious game. In a multimethod approach, 70 undergraduates played a serious game relating to financial education while their bodily expression was captured using an off-the-shelf depth-image sensor (Microsoft Kinect). In addition, self-reports of experienced enjoyment, boredom, and frustration were collected repeatedly during gameplay, to address the dynamic changes in emotions occurring in educational tasks. Results showed that, firstly, the intensities of all emotions indeed changed significantly over the course of the game. Secondly, by using generalized estimating equations, distinct features of bodily expression could be identified as significant indicators for each emotion under investigation. Keeping the head turned more to the right was positively related to experienced frustration, whereas keeping it turned more to the left was positively related to enjoyment. Furthermore, positioning the upper body closer to the gaming screen was also positively related to frustration. Finally, increased activity of a participant’s head emerged as a significant indicator of experienced boredom. These results confirm the value of bodily expression as an indicator of emotional experience in multimedia learning systems. Furthermore, the findings may guide developers of emotion recognition procedures by focusing on the identified features of bodily expression. PMID:28798717
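For readers unfamiliar with the statistical approach: a generalized estimating equations model of this kind, repeated self-reports per participant regressed on bodily features with the participant as the clustering unit, can be sketched with statsmodels as below. The column names (subject, frustration, head_yaw, body_distance) are hypothetical placeholders, not the study's variables.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical long-format table: one row per (participant, measurement).
    df = pd.read_csv("gameplay_features.csv")

    # GEE with an exchangeable working correlation handles the repeated
    # measurements within each participant.
    model = smf.gee("frustration ~ head_yaw + body_distance",
                    groups="subject", data=df,
                    cov_struct=sm.cov_struct.Exchangeable(),
                    family=sm.families.Gaussian())
    result = model.fit()
    print(result.summary())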
The effect of age on memory for emotional faces.
Grady, Cheryl L; Hongwanishkul, Donaya; Keightley, Michelle; Lee, Wendy; Hasher, Lynn
2007-05-01
Prior studies of emotion suggest that young adults should have enhanced memory for negative faces and that this enhancement should be reduced in older adults. Several studies have not shown these effects but were conducted with procedures different from those used with other emotional stimuli. In this study, researchers examined age differences in recognition of faces with emotional or neutral expressions, using trial-unique stimuli, as is typically done with other types of emotional stimuli. They also assessed the influence of personality traits and mood on memory. Enhanced recognition for negative faces was found in young adults but not in older adults. Recognition of faces was not influenced by mood or personality traits in young adults, but lower levels of extraversion and better emotional sensitivity predicted better negative face memory in older adults. These results suggest that negative expressions enhance memory for faces in young adults, as negative valence enhances memory for words and scenes. This enhancement is absent in older adults, but memory for emotional faces is modulated in older adults by personality traits that are relevant to emotional processing. (c) 2007 APA, all rights reserved.
Emotion Knowledge and Attentional Differences in Preschoolers Showing Context-Inappropriate Anger.
Locke, Robin L; Lang, Nichole J
2016-08-01
Some children show anger that is inappropriate for the situation given the predominant incentives; this is called context-inappropriate anger. Children need to attend to and interpret situational incentives to produce appropriate emotional responses. We examined associations of context-inappropriate anger with emotion recognition and attention problems in 43 preschoolers (42% male; M age = 55.1 months, SD = 4.1). Parents rated context-inappropriate anger across situations. Teachers rated attention problems using the Child Behavior Checklist-Teacher Report Form. Emotion recognition was assessed as the ability to recognize emotional faces using the Emotion Matching Test. Anger perception bias was indexed by attributing anger to non-anger situations in an adapted Affect Knowledge Test. Twenty-eight percent of children showed context-inappropriate anger, which correlated with lower emotion recognition (β = -.28) and higher attention problems (β = .36). Higher attention problems correlated with more anger perception bias (β = .32). This cross-sectional, correlational study provides preliminary findings that children with context-inappropriate anger showed more attention problems, which suggests that both "problems" tend to covary and associate with deficits or biases in emotion knowledge. © The Author(s) 2016.