Science.gov

Sample records for facial expression

  1. Holistic facial expression classification

    NASA Astrophysics Data System (ADS)

    Ghent, John; McDonald, J.

    2005-06-01

    This paper details a procedure for classifying facial expressions, a growing and relatively new problem within computer vision. One of the fundamental problems in previous approaches to classifying facial expressions is the lack of a consistent method of measuring expression. This paper addresses that problem by computing the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face; all expressions can be described using the AUs defined by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to lower the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVM), we classify facial expressions. SVMs are a powerful machine learning technique based on optimisation theory. This project is largely concerned with the statistical models, machine learning techniques and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a level of interaction with a computer that is a significant step forward in human-computer interaction.
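    The landmark-PCA-SVM pipeline described above can be sketched in a few lines. This is a minimal illustration on synthetic data: the 122-landmark count follows the abstract, while the component count, kernel, and labels are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in data: 122 (x, y) landmarks per face, flattened to a
# 244-dimensional shape vector; labels are expression classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 122 * 2))
y = rng.integers(0, 6, size=200)

# PCA lowers the dimensionality of the landmark space; the SVM then
# classifies expressions in the reduced shape space.
model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
model.fit(X, y)
print(model.predict(X[:5]).shape)  # (5,)
```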

  2. PCA facial expression recognition

    NASA Astrophysics Data System (ADS)

    El-Hori, Inas H.; El-Momen, Zahraa K.; Ganoun, Ali

    2013-12-01

    This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. A comparative study of two Facial Expression Recognition (FER) techniques, Principal Component Analysis (PCA) and PCA with Gabor filters (GF), is presented. The objective of this research is to show that PCA with Gabor filters is superior to plain PCA in terms of recognition rate. To test and evaluate their performance, experiments are performed with both techniques on a real database. The five principal emotions to be recognized are Happy, Sad, Disgust, and Angry, along with Neutral. Recognition rates are reported for all the facial expressions.
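    The two compared pipelines can be sketched as follows. Synthetic images stand in for the face database, and the filter-bank parameters and component counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.decomposition import PCA

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real-valued Gabor kernel built from first principles."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotated carrier axis
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

rng = np.random.default_rng(1)
faces = rng.normal(size=(50, 32, 32))  # stand-in for face images

# Plain PCA operates on raw pixels; the Gabor variant first filters each
# image with a small bank of orientations, then applies PCA.
bank = [gabor_kernel(theta=t) for t in (0.0, np.pi / 4, np.pi / 2)]
gabor_feats = np.stack(
    [np.concatenate([convolve(img, k).ravel() for k in bank]) for img in faces]
)

raw_pca = PCA(n_components=10).fit(faces.reshape(50, -1))
gabor_pca = PCA(n_components=10).fit(gabor_feats)
print(raw_pca.components_.shape, gabor_pca.components_.shape)
```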

  3. Measuring facial expression of emotion.

    PubMed

    Wolf, Karsten

    2015-12-01

    Research into emotions has increased in recent decades, especially on the subject of recognition of emotions. However, studies of the facial expressions of emotion were compromised by technical problems with visible video analysis and electromyography in experimental settings. These have only recently been overcome. There have been new developments in the field of automated computerized facial recognition, allowing real-time identification of facial expression in social environments. This review addresses three approaches to measuring facial expression of emotion and describes their specific contributions to understanding emotion in the healthy population and in persons with mental illness. Despite recent progress, studies on human emotions have been hindered by the lack of consensus on an emotion theory suited to examining the dynamic aspects of emotion and its expression. Studying expression of emotion in patients with mental health conditions for diagnostic and therapeutic purposes will profit from theoretical and methodological progress.

  5. Facial expression recognition with facial parts based sparse representation classifier

    NASA Astrophysics Data System (ADS)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication. Understanding facial expression is a basic requirement in the development of next-generation human-computer interaction systems. Research shows that the intrinsic facial features always lie in low-dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier, which exploits sparse representation to select face features and classify facial expressions. The sparse solution is obtained by solving an l1-norm minimization problem with a linear-combination equality constraint. Experimental results show that sparse representation is efficient for facial expression recognition and that the sparse representation classifier obtains much higher recognition accuracies than the other compared methods.
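    A sparse representation classifier of the kind described can be sketched as below. This uses scikit-learn's Lasso as an l1 relaxation of the constrained minimization in the abstract, on synthetic class clusters; the dictionary size, regularization weight, and data are all assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic dictionary: training feature vectors grouped by class form
# the columns (atoms), each l2-normalised.
rng = np.random.default_rng(2)
n_per_class, dim, n_classes = 20, 30, 3
centers = rng.normal(scale=3.0, size=(n_classes, dim))
train = np.vstack([centers[c] + 0.3 * rng.normal(size=(n_per_class, dim))
                   for c in range(n_classes)])
labels = np.repeat(np.arange(n_classes), n_per_class)
D = (train / np.linalg.norm(train, axis=1, keepdims=True)).T

def src_classify(x):
    # Sparse code of x over the dictionary (Lasso relaxation of the
    # l1-constrained problem described in the abstract).
    code = Lasso(alpha=0.05, max_iter=10000).fit(D, x).coef_
    # Class-wise reconstruction residuals; the smallest residual wins.
    residuals = [np.linalg.norm(x - D[:, labels == c] @ code[labels == c])
                 for c in range(n_classes)]
    return int(np.argmin(residuals))

test_x = centers[1] + 0.3 * rng.normal(size=dim)
print(src_classify(test_x))
```

    Because only atoms from the true class reconstruct the sample well, the residual comparison performs feature selection and classification in one step.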

  6. Cortical control of facial expression.

    PubMed

    Müri, René M

    2016-06-01

    The present Review deals with the motor control of facial expressions in humans. Facial expressions are a central part of human communication. Emotional face expressions have a crucial role in human nonverbal behavior, allowing a rapid transfer of information between individuals. Facial expressions can be either voluntarily or emotionally controlled. Recent studies in nonhuman primates and humans have revealed that the motor control of facial expressions has a distributed neural representation. At least five cortical regions on the medial and lateral aspects of each hemisphere are involved: the primary motor cortex, the ventral lateral premotor cortex, the supplementary motor area on the medial wall, and the rostral and caudal cingulate cortex. The results of studies in humans and nonhuman primates suggest that the innervation of the face is bilaterally controlled for the upper part and mainly contralaterally controlled for the lower part. Furthermore, the primary motor cortex, the ventral lateral premotor cortex, and the supplementary motor area are essential for the voluntary control of facial expressions. In contrast, the cingulate cortical areas are important for emotional expression, because they receive input from different structures of the limbic system. PMID:26418049

  7. Compound facial expressions of emotion.

    PubMed

    Du, Shichuan; Tao, Yong; Martinez, Aleix M

    2014-04-15

    Understanding the different categories of facial expressions of emotion regularly used by us is essential to gain insights into human cognition and affect as well as for the design of computational models and perceptual interfaces. Past research on facial expressions of emotion has focused on the study of six basic categories--happiness, surprise, anger, sadness, fear, and disgust. However, many more facial expressions of emotion exist and are used regularly by humans. This paper describes an important group of expressions, which we call compound emotion categories. Compound emotions are those that can be constructed by combining basic component categories to create new ones. For instance, happily surprised and angrily surprised are two distinct compound emotion categories. The present work defines 21 distinct emotion categories. Sample images of their facial expressions were collected from 230 human subjects. A Facial Action Coding System analysis shows the production of these 21 categories is different but consistent with the subordinate categories they represent (e.g., a happily surprised expression combines muscle movements observed in happiness and surprised). We show that these differences are sufficient to distinguish between the 21 defined categories. We then use a computational model of face perception to demonstrate that most of these categories are also visually discriminable from one another.
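    The compound-category construction can be illustrated with set unions over FACS action units. The AU sets below are rough, commonly cited prototypes chosen for illustration, not the paper's exact coding.

```python
# Illustrative FACS action-unit sets for three basic categories.
basic = {
    "happiness": {6, 12},        # cheek raiser, lip corner puller
    "surprise": {1, 2, 25, 26},  # brow raisers, lips part, jaw drop
    "anger": {4, 7, 24},         # brow lowerer, lid tightener, lip pressor
}

# A compound category combines the muscle movements of its components,
# as in the abstract's "happily surprised" example.
def compound(a, b):
    return basic[a] | basic[b]

happily_surprised = compound("happiness", "surprise")
print(sorted(happily_surprised))  # [1, 2, 6, 12, 25, 26]
```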

  9. Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Sato, Wataru; Yoshikawa, Sakiko

    2007-01-01

    Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…

  10. Analysis of Facial Expression by Taste Stimulation

    NASA Astrophysics Data System (ADS)

    Tobitani, Kensuke; Kato, Kunihito; Yamamoto, Kazuhiko

    In this study, we focused on basic taste stimulation for the analysis of real facial expressions. We considered that the expressions caused by taste stimulation are unaffected by individuality or emotion, that is, they are involuntary. We analyzed the movement of facial muscles under taste stimulation and compared real expressions with artificial ones. From the results, we identified an obvious difference between real and artificial expressions. Thus, our method could offer a new approach to facial expression recognition.

  11. Mapping and Manipulating Facial Expression

    ERIC Educational Resources Information Center

    Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R.; Brick, Timothy R.; Cohn, Jeffrey F.; Boker, Steven M.

    2009-01-01

    Nonverbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this article we describe techniques for manipulating both verbal and nonverbal facial…

  12. Facial dynamics and emotional expressions in facial aging treatments.

    PubMed

    Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar

    2015-03-01

    Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the symptomatological analysis of facial aging and the treatment plan must include knowledge of the facial dynamics and the emotional expressions of the face. This approach aims to more closely meet patients' expectations of natural-looking results, by correcting age-related negative expressions while observing the emotional language of the face. This article will successively describe patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Finally, therapeutic implications for facial aging treatment will be addressed.

  13. Neuroticism Delays Detection of Facial Expressions.

    PubMed

    Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Kubota, Yasutaka; Yoshimura, Sayaka; Toichi, Motomi

    2016-01-01

    The rapid detection of emotional signals from facial expressions is fundamental for human social interaction. The personality factor of neuroticism modulates the processing of various types of emotional facial expressions; however, its effect on the detection of emotional facial expressions remains unclear. In this study, participants with high- and low-neuroticism scores performed a visual search task to detect normal expressions of anger and happiness, and their anti-expressions, within a crowd of neutral expressions. Anti-expressions contained an amount of visual change equivalent to that found in normal expressions compared to neutral expressions, but they were usually recognized as neutral expressions. Subjective emotional ratings in response to each facial expression stimulus were also obtained. Participants with high neuroticism showed an overall delay in the detection of target facial expressions compared to participants with low neuroticism. Additionally, the high-neuroticism group showed higher levels of arousal to facial expressions compared to the low-neuroticism group. These data suggest that neuroticism modulates the detection of emotional facial expressions in healthy participants; high levels of neuroticism delay overall detection of facial expressions and enhance emotional arousal in response to facial expressions.

  15. Facial expressions recognition with an emotion expressive robotic head

    NASA Astrophysics Data System (ADS)

    Doroftei, I.; Adascalitei, F.; Lefeber, D.; Vanderborght, B.; Doroftei, I. A.

    2016-08-01

    The purpose of this study is to present the preliminary steps in facial expression recognition with a new version of an expressive social robotic head. In a first phase, our main goal was to reach a minimum level of emotional expressiveness, building six basic facial expressions to enable nonverbal communication between the robot and humans. To evaluate the facial expressions, the robot was used in preliminary user studies with children and adults.

  16. Facial Expressions, Emotions, and Sign Languages

    PubMed Central

    Elliott, Eeva A.; Jacobs, Arthur M.

    2013-01-01

    Facial expressions are used by humans to convey various types of meaning in various contexts. The range of meanings spans basic possibly innate socio-emotional concepts such as “surprise” to complex and culture specific concepts such as “carelessly.” The range of contexts in which humans use facial expressions spans responses to events in the environment to particular linguistic constructions within sign languages. In this mini review we summarize findings on the use and acquisition of facial expressions by signers and present a unified account of the range of facial expressions used by referring to three dimensions on which facial expressions vary: semantic, compositional, and iconic. PMID:23482994

  17. Recognizing Facial Expressions Automatically from Video

    NASA Astrophysics Data System (ADS)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are the face changes in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22] was the first to describe in detail the specific facial expressions associated with emotions in animals and humans, arguing that all mammals show emotions reliably in their faces. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  18. Mapping and manipulating facial expression.

    PubMed

    Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R; Brick, Timothy R; Cohn, Jeffrey F; Boker, Steven M

    2009-01-01

    Nonverbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this article we describe techniques for manipulating both verbal and nonverbal facial gestures in video sequences of people engaged in conversation. We are developing a system for use in psychological experiments, where the effects of manipulating individual components of nonverbal visual behavior during live face-to-face conversation can be studied. In particular, the techniques we describe operate in real-time at video frame-rate and the manipulation can be applied so both participants in a conversation are kept blind to the experimental conditions. PMID:19624037

  19. Facial expression recognition using thermal image.

    PubMed

    Jiang, Guotai; Song, Xuemin; Zheng, Fuhui; Wang, Peipei; Omer, Ashgan

    2005-01-01

    This paper studies facial expression recognition using mathematical morphology, by extracting and analyzing the overall geometric characteristics, and those of regions of interest, in Infrared Thermal Imaging (IRTI). The results show that the geometric characteristics of the regions of interest differ clearly across expressions, and that facial temperature changes almost simultaneously with the expression. These studies show the feasibility of facial expression recognition based on IRTI. This method can monitor facial expression in real time, which may be useful in auxiliary medical diagnosis.

  20. Man-machine collaboration using facial expressions

    NASA Astrophysics Data System (ADS)

    Dai, Ying; Katahera, S.; Cai, D.

    2002-09-01

    For realizing flexible man-machine collaboration, understanding facial expressions and gestures is essential. We propose a hierarchical recognition approach to understanding human emotions. In this method, facial AFs (action features) are first extracted and recognized using histograms of optical flow. Based on the facial AFs, facial expressions are then classified into two classes, one representing positive emotions and the other negative ones. The expressions in the positive and negative classes are further classified into the more complex emotions they reveal. Finally, a system architecture that coordinates the recognition of facial action features and facial expressions for man-machine collaboration is proposed.
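    The histogram-of-optical-flow action features mentioned above can be sketched as follows. The flow field here is random stand-in data, and the 8-bin orientation histogram is an assumed binning, not the paper's.

```python
import numpy as np

# Random stand-in for a dense optical-flow field (dy, dx per pixel);
# in the paper the flow would come from consecutive face frames.
rng = np.random.default_rng(5)
flow = rng.normal(size=(32, 32, 2))

# Orientation histogram of the flow vectors: the kind of action-feature
# descriptor the abstract builds on (binning scheme is illustrative).
angles = np.arctan2(flow[..., 0], flow[..., 1]).ravel()
hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi))
print(hist.shape, int(hist.sum()))  # (8,) 1024
```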

  1. Recognizing Action Units for Facial Expression Analysis.

    PubMed

    Tian, Ying-Li; Kanade, Takeo; Cohn, Jeffrey F

    2001-02-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression as action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized, whether they occur alone or in combination. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams.
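    Recognizing AUs alone or in combination is naturally framed as multi-label classification over the extracted feature parameters. A minimal sketch on synthetic data follows; the feature count, AU count, and classifier are assumptions, not the AFA system's actual design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Synthetic stand-in for the extracted parametric descriptions
# (lip height, brow distance, furrow depth, ...): 8 parameters per frame.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 8))
# Multi-label targets: one column per AU marks present/absent, so AU
# combinations are represented naturally.
Y = (X @ rng.normal(size=(8, 5)) > 0.5).astype(int)  # 5 synthetic AUs

# One logistic classifier per AU; a frame may activate several AUs at once.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(clf.predict(X[:3]).shape)  # (3, 5)
```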

  2. Social Use of Facial Expressions in Hylobatids.

    PubMed

    Scheider, Linda; Waller, Bridget M; Oña, Leonardo; Burrows, Anne M; Liebal, Katja

    2016-01-01

    Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social) contexts the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than in non-social contexts, where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely 'responded to' by the partner's facial expressions when facing another individual than when non-facing. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics.

  4. Cerebral regulation of facial expressions of pain.

    PubMed

    Kunz, Miriam; Chen, Jen-I; Lautenbacher, Stefan; Vachon-Presseau, Etienne; Rainville, Pierre

    2011-06-15

    Facial expression of affective states plays a key role in social interactions. Interestingly, however, individuals differ substantially in their level of expressiveness, ranging from high expressive to stoic individuals. Here, we investigate which brain mechanisms underlie the regulation of facial expressiveness to acute pain. Facial responses, pain ratings, and brain activity (BOLD-fMRI) evoked by noxious heat and warm (control) stimuli were recorded in 34 human volunteers with different degrees of facial expressiveness. Within-subject and between-subject variations in blood oxygenation level-dependent (BOLD) responses were examined specifically in relation to facial responses. Pain expression was inversely related to frontostriatal activity, consistent with a role in downregulating facial displays. More detailed analyses of the peak activity in medial prefrontal cortex revealed negative BOLD responses to thermal stimuli, an effect generally associated with the default mode network. Given that this negative BOLD response was weaker in low expressive individuals during pain, it could reflect stronger engagement in, or reduced disengagement from, self-reflective processes in stoic individuals. The occurrence of facial expressions during pain was coupled with stronger primary motor activity in the face area and-interestingly-in areas involved in pain processing. In conclusion, these results indicate that spontaneous pain expression reflects activity within nociceptive pathways while stoicism involves the active suppression of expression, a manifestation of learned display rules governing emotional communication and possibly related to an increased self-reflective or introspective focus. PMID:21677157

  5. Simultaneous facial feature tracking and facial expression recognition.

    PubMed

    Li, Yongqiang; Wang, Shangfei; Zhao, Yongping; Ji, Qiang

    2013-07-01

    The tracking and recognition of facial activities from images or videos have attracted great attention in the computer vision field. Facial activities are characterized at three levels. First, at the bottom level, facial feature points around each facial component (e.g., eyebrow, mouth) capture the detailed face shape information. Second, at the middle level, facial action units, defined in the Facial Action Coding System, represent the contraction of a specific set of facial muscles (e.g., lid tightener, eyebrow raiser). Finally, at the top level, six prototypical facial expressions represent the global facial muscle movement and are commonly used to describe human emotional states. In contrast to the mainstream approaches, which usually focus on only one or two levels of facial activities and track (or recognize) them separately, this paper introduces a unified probabilistic framework based on the dynamic Bayesian network to simultaneously and coherently represent the facial evolvement at different levels, their interactions and their observations. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, all three levels of facial activities are simultaneously recognized through probabilistic inference. Extensive experiments are performed to illustrate the feasibility and effectiveness of the proposed model on all three levels of facial activities.

  6. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative-regions representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and is performed using the Mutual Information (MI) technique. For facial feature extraction, we apply Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies show that using discriminative regions provides better results than using the whole face region, whilst reducing the feature vector dimension.
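    The mutual-information region selection step can be sketched with scikit-learn. Synthetic features stand in for per-region LBP histograms, and which regions are informative is constructed for the example.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for per-region descriptors: regions 0 and 1 carry
# class information, the remaining eight are pure noise.
rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=200)
informative = y[:, None] + 0.5 * rng.normal(size=(200, 2))
X = np.hstack([informative, rng.normal(size=(200, 8))])

# Mutual information scores each region; the descriptive regions should
# dominate and are kept for classification.
mi = mutual_info_classif(X, y, random_state=0)
selected = np.sort(np.argsort(mi)[-2:])
print(selected)  # [0 1]
```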

  7. Human Facial Expressions as Adaptations: Evolutionary Questions in Facial Expression Research

    PubMed Central

    SCHMIDT, KAREN L.; COHN, JEFFREY F.

    2007-01-01

    The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed. PMID:11786989

  8. Robust facial expression recognition via compressive sensing.

    PubMed

    Zhang, Shiqing; Zhao, Xiaoming; Lei, Bicheng

    2012-01-01

    Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on the CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method are investigated on clean and occluded facial expression images. Three typical facial features, i.e., the raw pixels, Gabor wavelets representation and local binary patterns (LBP), are extracted to evaluate the performance of the SRC method. Compared with the nearest neighbor (NN), linear support vector machines (SVM) and the nearest subspace (NS), experimental results on the popular Cohn-Kanade facial expression database demonstrate that the SRC method obtains better performance and stronger robustness to corruption and occlusion on robust facial expression recognition tasks.
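
    The SRC scheme the abstract describes, sparse-coding a test face over a dictionary of training faces and assigning the class with the smallest reconstruction residual, can be sketched as below. The ISTA solver, the regularization weight `lam`, and the iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def ista_l1(D, y, lam=0.05, iters=500):
    """Sparse code x minimizing (1/2)||Dx - y||^2 + lam*||x||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ x - y)              # gradient step on the quadratic term
        x = x - g / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

def src_classify(D, labels, y):
    """SRC: pick the class whose coefficients alone best reconstruct y."""
    x = ista_l1(D, y)
    best, best_r = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        xc = np.where(mask, x, 0.0)        # keep only class-c coefficients
        r = np.linalg.norm(y - D @ xc)
        if r < best_r:
            best, best_r = c, r
    return best
```

    Here `D` holds one feature vector (raw pixels, Gabor, or LBP) per column, with unit-normalized columns, and `labels` gives each column's expression class.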

  9. Classifying Chimpanzee Facial Expressions Using Muscle Action

    PubMed Central

    Parr, Lisa A.; Waller, Bridget M.; Vick, Sarah J.; Bard, Kim A.

    2010-01-01

    The Chimpanzee Facial Action Coding System (ChimpFACS) is an objective, standardized observational tool for measuring facial movement in chimpanzees based on the well-known human Facial Action Coding System (FACS; P. Ekman & W. V. Friesen, 1978). This tool enables direct structural comparisons of facial expressions between humans and chimpanzees in terms of their common underlying musculature. Here the authors provide data on the first application of the ChimpFACS to validate existing categories of chimpanzee facial expressions using discriminant functions analyses. The ChimpFACS validated most existing expression categories (6 of 9) and, where the predicted group memberships were poor, the authors discuss potential problems with ChimpFACS and/or existing categorizations. The authors also report the prototypical movement configurations associated with these 6 expression categories. For all expressions, unique combinations of muscle movements were identified, and these are illustrated as peak intensity prototypical expression configurations. Finally, the authors suggest a potential homology between these prototypical chimpanzee expressions and human expressions based on structural similarities. These results contribute to our understanding of the evolution of emotional communication by suggesting several structural homologies between the facial expressions of chimpanzees and humans and facilitating future research. PMID:17352572

  10. Facial Expressivity in Infants of Depressed Mothers.

    ERIC Educational Resources Information Center

    Pickens, Jeffrey; Field, Tiffany

    1993-01-01

    Facial expressions were examined in 84 3-month-old infants of mothers classified as depressed, nondepressed, or low scoring on the Beck Depression Inventory. Infants of both depressed and low-scoring mothers showed significantly more sadness and anger expressions and fewer interest expressions than infants of nondepressed mothers. (Author/MDM)

  11. Facial Expression Biometrics Using Statistical Shape Models

    NASA Astrophysics Data System (ADS)

    Quan, Wei; Matuszewski, Bogdan J.; Shark, Lik-Kwan; Ait-Boudaoud, Djamel

    2009-12-01

    This paper describes a novel method for representing different facial expressions based on the shape space vector (SSV) of the statistical shape model (SSM) built from 3D facial data. The method relies only on the 3D shape, with texture information not being used in any part of the algorithm, which makes it inherently invariant to changes in the background and illumination, and to some extent to viewing angle variations. To evaluate the proposed method, two comprehensive 3D facial data sets have been used for testing. The experimental results show that the SSV not only controls the shape variations but also captures the expressive characteristics of the faces, and can be used as a significant feature for facial expression recognition. Finally, the paper suggests improvements of the SSV discriminatory characteristics by using 3D facial sequences rather than 3D stills.
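
    A minimal sketch of the SSM/SSV idea: build the model by PCA over aligned landmark vectors, then use the projection coefficients (the SSV) as the expression feature. The data layout (one flattened 3D landmark set per row) and the function names are simplifying assumptions; landmark alignment is assumed to have been done beforehand.

```python
import numpy as np

def build_ssm(shapes):
    """Statistical shape model from an array of shape (n_samples, n_points*3).
    Returns the mean shape and the principal modes of variation."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # SVD of the centered data gives the PCA modes without forming the covariance.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    return mean, Vt, s

def shape_space_vector(shape, mean, Vt, k):
    """Project a shape onto the first k modes: this is the SSV feature."""
    return Vt[:k] @ (shape - mean)
```

    A classifier would then be trained on the low-dimensional SSVs rather than on the raw 3D coordinates.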

  12. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    PubMed Central

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.

  13. Facial and vocal expressions of emotion.

    PubMed

    Russell, James A; Bachorowski, Jo-Anne; Fernandez-Dols, Jose-Miguel

    2003-01-01

    A flurry of theoretical and empirical work concerning the production of and response to facial and vocal expressions has occurred in the past decade. That emotional expressions express emotions is a tautology but may not be a fact. Debates have centered on universality, the nature of emotion, and the link between emotions and expressions. Modern evolutionary theory is informing more models, emphasizing that expressions are directed at a receiver, that the interests of sender and receiver can conflict, that there are many determinants of sending an expression in addition to emotion, that expressions influence the receiver in a variety of ways, and that the receiver's response is more than simply decoding a message.

  14. Biased Facial Expression Interpretation in Shy Children

    ERIC Educational Resources Information Center

    Kokin, Jessica; Younger, Alastair; Gosselin, Pierre; Vaillancourt, Tracy

    2016-01-01

    The relationship between shyness and the interpretations of the facial expressions of others was examined in a sample of 123 children aged 12 to 14 years. Participants viewed faces displaying happiness, fear, anger, disgust, sadness, surprise, as well as a neutral expression, presented on a computer screen. The children identified each expression…

  15. 3D facial expression modeling for recognition

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Jain, Anil K.; Dass, Sarat C.

    2005-03-01

    Current two-dimensional image based face recognition systems encounter difficulties with large variations in facial appearance due to the pose, illumination and expression changes. Utilizing 3D information of human faces is promising for handling the pose and lighting variations. While the 3D shape of a face does not change due to head pose (rigid) and lighting changes, it is not invariant to non-rigid facial movement and evolution, such as expressions and aging effects. We propose a facial surface matching framework to match multiview facial scans to a 3D face model, where the (non-rigid) expression deformation is explicitly modeled for each subject, resulting in a person-specific deformation model. The thin plate spline (TPS) is applied to model the deformation based on the facial landmarks. The deformation is applied to the 3D neutral expression face model to synthesize the corresponding expression. Both the neutral and the synthesized 3D surface models are used to match a test scan. The surface registration and matching between a test scan and a 3D model are achieved by a modified Iterative Closest Point (ICP) algorithm. Preliminary experimental results demonstrate that the proposed expression modeling and recognition-by-synthesis schemes improve the 3D matching accuracy.
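
    The TPS deformation step, warping a neutral 3D model toward an expression given matched landmark pairs, can be sketched with SciPy's `RBFInterpolator` using its thin-plate-spline kernel. This is only the deformation piece (the ICP registration stage is omitted), and the function name and landmark arrays are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(src_landmarks, dst_landmarks, points):
    """Thin-plate-spline deformation that maps src landmarks to dst landmarks,
    applied to arbitrary 3D surface points (one point per row)."""
    # One smooth interpolant per output coordinate, sharing the TPS kernel;
    # zero smoothing means the landmarks are matched exactly.
    warp = RBFInterpolator(src_landmarks, dst_landmarks,
                           kernel='thin_plate_spline')
    return warp(points)
```

    In the paper's setting, `points` would be all vertices of the neutral face model, so the warp synthesizes the expression over the whole surface from the landmark correspondences alone.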

  16. The identification of unfolding facial expressions.

    PubMed

    Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo

    2012-01-01

    We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames/s) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating in time individual diagnostic facial actions, and does not require perceiving the full apex configuration.

  17. Emotional attention capture by facial expressions

    PubMed Central

    Sawada, Reiko; Sato, Wataru

    2015-01-01

    Previous studies have shown that emotional facial expressions capture visual attention. However, it has been unclear whether attentional modulation is attributable to their emotional significance or to their visual features. We investigated this issue using a spatial cueing paradigm in which non-predictive cues were peripherally presented before the target was presented in either the same (valid trial) or the opposite (invalid trial) location. The target was an open dot and the cues were photographs of normal emotional facial expressions of anger and happiness, their anti-expressions and neutral expressions. Anti-expressions contained the amount of visual changes equivalent to normal emotional expressions compared with neutral expressions, but they were usually perceived as emotionally neutral. The participants were asked to localize the target as soon as possible. After the cueing task, they evaluated their subjective emotional experiences to the cue stimuli. Compared with anti-expressions, the normal emotional expressions decreased and increased the reaction times (RTs) in the valid and invalid trials, respectively. Shorter RTs in the valid trials and longer RTs in the invalid trials were related to higher subjective arousal ratings. These results suggest that emotional facial expressions accelerate attentional engagement and prolong attentional disengagement due to their emotional significance. PMID:26365083

  18. Stereoscopy Amplifies Emotions Elicited by Facial Expressions

    PubMed Central

    Kätsyri, Jari; Häkkinen, Jukka

    2015-01-01

    Mediated facial expressions do not elicit emotions as strongly as real-life facial expressions, possibly due to the low fidelity of pictorial presentations in typical mediation technologies. In the present study, we investigated the extent to which stereoscopy amplifies emotions elicited by images of neutral, angry, and happy facial expressions. The emotional self-reports of positive and negative valence (which were evaluated separately) and arousal of 40 participants were recorded. The magnitude of perceived depth in the stereoscopic images was manipulated by varying the camera base at 15, 40, 65, 90, and 115 mm. The analyses controlled for participants’ gender, gender match, emotional empathy, and trait alexithymia. The results indicated that stereoscopy significantly amplified the negative valence and arousal elicited by angry expressions at the most natural (65 mm) camera base, whereas stereoscopy amplified the positive valence elicited by happy expressions in both the narrowed and most natural (15–65 mm) base conditions. Overall, the results indicate that stereoscopy amplifies the emotions elicited by mediated emotional facial expressions when the depth geometry is close to natural. The findings highlight the sensitivity of the visual system to depth and its effect on emotions. PMID:27551358

  19. Stereoscopy Amplifies Emotions Elicited by Facial Expressions.

    PubMed

    Hakala, Jussi; Kätsyri, Jari; Häkkinen, Jukka

    2015-12-01

    Mediated facial expressions do not elicit emotions as strongly as real-life facial expressions, possibly due to the low fidelity of pictorial presentations in typical mediation technologies. In the present study, we investigated the extent to which stereoscopy amplifies emotions elicited by images of neutral, angry, and happy facial expressions. The emotional self-reports of positive and negative valence (which were evaluated separately) and arousal of 40 participants were recorded. The magnitude of perceived depth in the stereoscopic images was manipulated by varying the camera base at 15, 40, 65, 90, and 115 mm. The analyses controlled for participants' gender, gender match, emotional empathy, and trait alexithymia. The results indicated that stereoscopy significantly amplified the negative valence and arousal elicited by angry expressions at the most natural (65 mm) camera base, whereas stereoscopy amplified the positive valence elicited by happy expressions in both the narrowed and most natural (15-65 mm) base conditions. Overall, the results indicate that stereoscopy amplifies the emotions elicited by mediated emotional facial expressions when the depth geometry is close to natural. The findings highlight the sensitivity of the visual system to depth and its effect on emotions. PMID:27551358

  1. Manifold based methods in facial expression recognition

    NASA Astrophysics Data System (ADS)

    Xie, Kun

    2013-07-01

    This paper describes a novel method for facial expression recognition based on non-linear manifold techniques. Graph-based algorithms are designed to treat structure in data and regularize accordingly. This goal is shared by several other algorithms, from the linear method of principal component analysis (PCA) to modern variants such as Laplacian eigenmaps. In this paper we focus on manifold learning for dimensionality reduction and clustering using Laplacian eigenmaps for facial expression recognition. We evaluate the algorithm using all the pixels and selected features respectively, and compare the performance of the proposed non-linear manifold method with the previous linear manifold approach; the non-linear method produces a higher recognition rate than facial expression representations using linear methods.
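
    Laplacian eigenmaps can be sketched directly in NumPy: build a symmetrized k-nearest-neighbor graph over the samples, form the graph Laplacian, and embed each sample with the smallest non-trivial eigenvectors. The neighborhood size, binary edge weights, and use of the unnormalized Laplacian are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def laplacian_eigenmaps(X, n_neighbors=5, n_components=2):
    """Embed rows of X via the smallest non-trivial eigenvectors of the
    graph Laplacian of a symmetrized k-nearest-neighbor graph."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise distances
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1:n_neighbors + 1]        # skip self at index 0
        W[i, nn] = 1.0
    W = np.maximum(W, W.T)                               # symmetrize the graph
    L = np.diag(W.sum(1)) - W                            # unnormalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    # Skip the constant eigenvector belonging to eigenvalue 0.
    return vecs[:, 1:1 + n_components]
```

    Each face image (all pixels, or a selected-feature vector) becomes one row of `X`; clustering or classification is then done in the low-dimensional embedding.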

  2. Sex Differences in the Rapid Detection of Emotional Facial Expressions

    PubMed Central

    Sawada, Reiko; Sato, Wataru; Kochiyama, Takanori; Uono, Shota; Kubota, Yasutaka; Yoshimura, Sayaka; Toichi, Motomi

    2014-01-01

    Background Previous studies have shown that females and males differ in the processing of emotional facial expressions including the recognition of emotion, and that emotional facial expressions are detected more rapidly than are neutral expressions. However, whether the sexes differ in the rapid detection of emotional facial expressions remains unclear. Methodology/Principal Findings We measured reaction times (RTs) during a visual search task in which 44 females and 46 males detected normal facial expressions of anger and happiness or their anti-expressions within crowds of neutral expressions. Anti-expressions expressed neutral emotions with visual changes quantitatively comparable to normal expressions. We also obtained subjective emotional ratings in response to the facial expression stimuli. RT results showed that both females and males detected normal expressions more rapidly than anti-expressions and normal-angry expressions more rapidly than normal-happy expressions. However, females and males showed different patterns in their subjective ratings in response to the facial expressions. Furthermore, sex differences were found in the relationships between subjective ratings and RTs. High arousal was more strongly associated with rapid detection of facial expressions in females, whereas negatively valenced feelings were more clearly associated with the rapid detection of facial expressions in males. Conclusion Our data suggest that females and males differ in their subjective emotional reactions to facial expressions and in the emotional processes that modulate the detection of facial expressions. PMID:24728084

  3. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem, and seven classes are classified: happiness, anger, sadness, disgust, surprise, fear and contempt. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is acquired on the CK+ database. On the other hand, the performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol. Seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if a good localization of facial points and a good partitioning strategy are followed.

  4. Mapping the development of facial expression recognition.

    PubMed

    Rodger, Helen; Vizioli, Luca; Ouyang, Xinyi; Caldara, Roberto

    2015-11-01

    Reading the non-verbal cues from faces to infer the emotional states of others is central to our daily social interactions from very early in life. Despite the relatively well-documented ontogeny of facial expression recognition in infancy, our understanding of the development of this critical social skill throughout childhood into adulthood remains limited. To this end, using a psychophysical approach we implemented the QUEST threshold-seeking algorithm to parametrically manipulate the quantity of signals available in faces normalized for contrast and luminance displaying the six emotional expressions, plus neutral. We thus determined observers' perceptual thresholds for effective discrimination of each emotional expression from 5 years of age up to adulthood. Consistent with previous studies, happiness was most easily recognized with minimum signals (35% on average), whereas fear required the maximum signals (97% on average) across groups. Overall, recognition improved with age for all expressions except happiness and fear, for which all age groups including the youngest remained within the adult range. Uniquely, our findings characterize the recognition trajectories of the six basic emotions into three distinct groupings: expressions that show a steep improvement with age - disgust, neutral, and anger; expressions that show a more gradual improvement with age - sadness, surprise; and those that remain stable from early childhood - happiness and fear, indicating that the coding for these expressions is already mature by 5 years of age. Altogether, our data provide for the first time a fine-grained mapping of the development of facial expression recognition. This approach significantly increases our understanding of the decoding of emotions across development and offers a novel tool to measure impairments for specific facial expressions in developmental clinical populations. PMID:25704672

  6. Categorical Perception of Affective and Linguistic Facial Expressions

    ERIC Educational Resources Information Center

    McCullough, Stephen; Emmorey, Karen

    2009-01-01

    Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX…

  7. Violent Media Consumption and the Recognition of Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.

    2006-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph…

  8. Automatic recognition of emotions from facial expressions

    NASA Astrophysics Data System (ADS)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificially intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in the entertainment industry, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we have enhanced SVM to multiple dimensions while retaining the high accuracy rate of SVM. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).

  9. Categorical perception of affective and linguistic facial expressions.

    PubMed

    McCullough, Stephen; Emmorey, Karen

    2009-02-01

    Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers' response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience. PMID:19111287

  10. Quantifying facial expression recognition across viewing conditions.

    PubMed

    Goren, Deborah; Wilson, Hugh R

    2006-04-01

    Facial expressions are key to social interactions and to assessment of potential danger in various situations. Therefore, our brains must be able to recognize facial expressions when they are transformed in biologically plausible ways. We used synthetic happy, sad, angry and fearful faces to determine the amount of geometric change required to recognize these emotions during brief presentations. Five-alternative forced choice conditions involving central viewing, peripheral viewing and inversion were used to study recognition among the four emotions. Two-alternative forced choice was used to study affect discrimination when spatial frequency information in the stimulus was modified. The results show an emotion and task-dependent pattern of detection. Facial expressions presented with low peak frequencies are much harder to discriminate from neutral than faces defined by either mid or high peak frequencies. Peripheral presentation of faces also makes recognition much more difficult, except for happy faces. Differences between fearful detection and recognition tasks are probably due to common confusions with sadness when recognizing fear from among other emotions. These findings further support the idea that these emotions are processed separately from each other. PMID:16364393

  11. Combining appearance and geometric features for facial expression recognition

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Liu, Honghai

    2015-03-01

    This paper introduces a method for facial expression recognition combining appearance and geometric facial features. The proposed framework consistently combines multiple facial representations at both global and local levels. First, covariance descriptors are computed to represent regional features combining various feature information with a low dimensionality. Then geometric features are detected to provide a general facial movement description of the facial expression. These appearance and geometric features are combined to form a vector representation of the facial expression. The proposed method is tested on the CK+ database and shows encouraging performance.
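
    A region covariance descriptor as described above can be sketched as follows: stack a per-pixel feature vector over a facial region and take its covariance, giving a small fixed-size matrix regardless of region size. The particular feature set used here (position, intensity, absolute gradients) is a common choice for covariance descriptors but is an assumption, not necessarily the paper's.

```python
import numpy as np

def region_covariance(img, y0, y1, x0, x1):
    """Covariance of per-pixel features [x, y, intensity, |dI/dx|, |dI/dy|]
    over the rectangular region img[y0:y1, x0:x1]. Always returns a 5x5
    symmetric matrix, so the descriptor dimension is low and fixed."""
    gy, gx = np.gradient(img.astype(float))
    ys, xs = np.mgrid[y0:y1, x0:x1]
    feats = np.stack([xs.ravel(), ys.ravel(),
                      img[y0:y1, x0:x1].ravel(),
                      np.abs(gx[y0:y1, x0:x1]).ravel(),
                      np.abs(gy[y0:y1, x0:x1]).ravel()])
    return np.cov(feats)
```

    Descriptors from several facial regions, combined with geometric landmark features as in the paper, would then be flattened and concatenated into the expression feature vector.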

  12. Adults' responsiveness to children's facial expressions.

    PubMed

    Aradhye, Chinmay; Vonk, Jennifer; Arida, Danielle

    2015-07-01

    We investigated the effect of young children's (hereafter children's) facial expressions on adult responsiveness. In Study 1, 131 undergraduate students from a midsized university in the midwestern United States rated children's images and videos with smiling, crying, or neutral expressions on cuteness, likelihood to adopt, and participants' experienced distress. Looking times at the images and videos were measured, along with ratings of cuteness, likelihood to adopt, and experienced distress on 10-point Likert scales. Videos of smiling children were rated as cuter and more likely to be adopted and were viewed for longer times compared with videos of crying children, which evoked more distress. In Study 2, we recorded responses from 101 of the same participants in an online survey measuring gender role identity, empathy, and perspective taking. Higher levels of femininity (as measured by Bem's Sex Role Inventory) predicted higher "likely to adopt" ratings for crying images. These findings indicate that adult perception of children and motivation to nurture are affected by both children's facial expressions and adult characteristics, and build on existing literature to demonstrate that children may use expressions to manipulate the motivations of even non-kin adults to direct attention toward and perhaps nurture young children. PMID:25838165

  13. Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders

    PubMed Central

    Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini

    2008-01-01

    Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693
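The temporal step of such a framework can be illustrated with a simplified sketch: per-frame class posteriors from some frame-level classifier (assumed here) are propagated along the video by exponential smoothing to form a probabilistic expression profile. The smoothing factor and the toy posteriors are illustrative, not the authors' actual propagation scheme.

```python
def propagate(frame_probs, alpha=0.6):
    """Exponentially smooth per-frame class probabilities across a video,
    yielding a temporal expression profile (a sketch of propagating
    frame-level posteriors; alpha is a hypothetical choice)."""
    profile = []
    prev = frame_probs[0]
    for p in frame_probs:
        mixed = [alpha * pi + (1 - alpha) * qi for pi, qi in zip(p, prev)]
        total = sum(mixed)
        prev = [m / total for m in mixed]  # renormalize to a distribution
        profile.append(prev)
    return profile

# Per-frame posteriors over (neutral, happy) from a hypothetical classifier
probs = [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]]
profile = propagate(probs)
```

The smoothed profile rises toward "happy" more gradually than the raw per-frame posteriors, which is the kind of temporal characteristic the abstract describes capturing.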

  14. Dynamic facial expressions are processed holistically, but not more holistically than static facial expressions.

    PubMed

    Tobin, Alanna; Favelle, Simone; Palermo, Romina

    2016-09-01

    There is evidence that facial expressions are perceived holistically and featurally. The composite task is a direct measure of holistic processing (although the absence of a composite effect implies the use of other types of processing). Most composite task studies have used static images, despite the fact that movement is an important aspect of facial expressions and there is some evidence that movement may facilitate recognition. We created static and dynamic composites, in which emotions were reliably identified from each half of the face. The magnitude of the composite effect was similar for static and dynamic expressions identified from the top half (anger, sadness and surprise) but was reduced in dynamic as compared to static expressions identified from the bottom half (fear, disgust and joy). Thus, any advantage in recognising dynamic over static expressions is not likely to stem from enhanced holistic processing; rather, motion may emphasise or disambiguate diagnostic featural information. PMID:26208146

  16. Effects of facial expression on working memory.

    PubMed

    Stiernströmer, Emelie S; Wolgast, Martin; Johansson, Mikael

    2016-08-01

    In long-term memory (LTM), emotional content may both enhance and impair memory; however, disagreement remains over whether emotional content exerts different effects on the ability to maintain and manipulate information over short intervals. Using a working-memory (WM) recognition task requiring the monitoring of faces displaying facial expressions of emotion, participants judged each face as identical (target) or not (non-target) to the one presented 2 trials back (2-back). Negative expressions were recognised better and faster, as reflected in higher target discriminability and target detection. Positive and negative expressions also induced a more liberal detection bias compared with neutral ones. Taking the preceding item into account, additional accuracy impairment (negative preceding negative target) and enhancement effects (negative or positive preceding neutral target) appeared. This illustrates a differential modulation of WM based on the affective tone of the target (mirroring LTM enhancement and recognition-bias effects) and of the preceding item (enhanced and impaired target detection). PMID:26238683
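Target discriminability and detection bias of the kind reported here are standard signal-detection quantities, computable from hit and false-alarm rates. A short sketch using the standard-library `statistics.NormalDist`; the rates below are hypothetical, not the study's data:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection indices: d' quantifies target discriminability,
    criterion c quantifies detection bias (negative c = liberal bias)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical rates: negative faces hit more often with fewer false alarms
d_neg, c_neg = dprime_and_criterion(0.90, 0.10)
d_neu, c_neu = dprime_and_criterion(0.75, 0.20)
```

On these toy numbers, d' is higher for the negative condition, the pattern the abstract describes as "higher target discriminability".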

  17. Suitable models for face geometry normalization in facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sadeghi, Hamid; Raie, Abolghasem A.

    2015-01-01

    Recently, facial expression recognition has attracted much attention in machine vision research because of its various applications. Accordingly, many facial expression recognition systems have been proposed. However, the majority of existing systems suffer from a critical problem: geometric variability. It directly affects the performance of geometric feature-based facial expression recognition approaches. Furthermore, it is a crucial challenge in appearance feature-based techniques. This variability appears in both neutral faces and facial expressions. Appropriate face geometry normalization can improve the accuracy of each facial expression recognition system. Therefore, this paper proposes different geometric models or shapes for normalization. Face geometry normalization removes geometric variability of facial images and consequently, appearance feature extraction methods can be accurately utilized to represent facial images. Thus, some expression-based geometric models are proposed for facial image normalization. Next, local binary patterns and local phase quantization are used for appearance feature extraction. A combination of an effective geometric normalization with accurate appearance representations results in more than a 4% accuracy improvement compared to several state-of-the-art methods in facial expression recognition. Moreover, utilizing the model of facial expressions which have larger mouth and eye region sizes gives higher accuracy due to the importance of these regions in facial expression.
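Of the two appearance descriptors mentioned, local binary patterns are straightforward to sketch: each pixel is coded by thresholding its 8 neighbours against it, and the codes are pooled into a 256-bin histogram. A minimal pure-Python version (the tiny image is illustrative; real systems use uniform patterns and per-block histograms):

```python
def lbp_histogram(img):
    """Basic 8-neighbour local binary patterns over a grayscale image
    (list of rows), returning a 256-bin histogram of pattern codes."""
    hist = [0] * 256
    # Clockwise neighbour offsets starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            hist[code] += 1
    return hist

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
hist = lbp_histogram(img)
```

After geometric normalization, such histograms become comparable across faces, which is the point the abstract makes about combining normalization with appearance features.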

  18. A comparison of facial expression properties in five hylobatid species.

    PubMed

    Scheider, Linda; Liebal, Katja; Oña, Leonardo; Burrows, Anne; Waller, Bridget

    2014-07-01

    Little is known about facial communication of lesser apes (family Hylobatidae) and how their facial expressions (and their use) relate to social organization. We investigated facial expressions (defined as combinations of facial movements) in social interactions of mated pairs in five different hylobatid species belonging to three different genera using a recently developed objective coding system, the Facial Action Coding System for hylobatid species (GibbonFACS). We described three important properties of their facial expressions and compared them between genera. First, we compared the rate of facial expressions, which was defined as the number of facial expressions per unit of time. Second, we compared their repertoire size, defined as the number of different types of facial expressions used, independent of their frequency. Third, we compared the diversity of expression, defined as the repertoire weighted by the rate of use for each type of facial expression. We observed a higher rate and diversity of facial expression, but no larger repertoire, in Symphalangus (siamangs) compared to Hylobates and Nomascus species. In line with previous research, these results suggest siamangs differ from other hylobatids in certain aspects of their social behavior. To investigate whether differences in facial expressions are linked to hylobatid socio-ecology, we used a Phylogenetic General Least Square (PGLS) regression analysis to correlate those properties with two social factors: group size and level of monogamy. No relationship between the properties of facial expressions and these socio-ecological factors was found. One explanation could be that facial expressions in hylobatid species are subject to phylogenetic inertia and do not differ sufficiently between species to reveal correlations with factors such as group size and monogamy level. PMID:24395677

  19. Static posed and evoked facial expressions of emotions in schizophrenia

    PubMed Central

    Kohler, Christian G.; Martin, Elizabeth A.; Stolar, Neal; Barrett, Fred S.; Verma, Ragini; Brensinger, Colleen; Bilker, Warren; Gur, Raquel E.; Gur, Ruben C.

    2010-01-01

    Objective Impaired facial expressions of emotions have been described as characteristic symptoms of schizophrenia. Differences regarding individual facial muscle changes associated with specific emotions in posed and evoked expressions remain unclear. This study examined static facial expressions of emotions for evidence of flattened and inappropriate affect in persons with stable schizophrenia. Methods 12 persons with stable schizophrenia and matched healthy controls underwent a standardized procedure for posed and evoked facial expressions of five universal emotions, including happy, sad, anger, fear, and disgust expressions, at three intensity levels. Subjects completed self-ratings of their emotion experience. Certified raters coded images of facial expressions for presence of action units (AUs) according to the Facial Action Coding System. Logistic regression analyses were used to examine differences in the presence of AUs and emotion experience ratings by diagnosis, condition and intensity of expression. Results Patient and control groups experienced similar intensities of emotions, however, the difference between posed and evoked emotions was less pronounced in patients. Differences in expression of frequent and infrequent AUs support clinical observations of flattened and inappropriate affect in schizophrenia. Specific differences involve the Duchenne smile for happy expressions and decreased furrowed brows in all negative emotion expressions in schizophrenia. Conclusion While patterns of facial expressions were similar between groups, general and emotion specific differences support the concept of impaired facial expressions in schizophrenia. Expression of emotions in schizophrenia could not be explained by impaired experience. Future directions may include automated measurement, remediation of expressions and early detection of schizophrenia. PMID:18789845

  20. Facial expression recognition in Williams syndrome.

    PubMed

    Gagliardi, Chiara; Frigerio, Elisa; Burt, D Michael; Cazzaniga, Ilaria; Perrett, David I; Borgatti, Renato

    2003-01-01

    Individuals with Williams syndrome (WS) excel in face recognition and show both a remarkable concern for social stimuli and a linguistic capacity for, in particular, emotionally referenced language. The animated full facial expression comprehension test (AFFECT), a new test of emotional expression perception, was used to compare participants with WS with both chronological and mental age-matched controls. It was found that expression recognition in WS was worse than that of chronologically age-matched controls but indistinguishable from that of mental age controls. Different processing strategies are thought to underlie the similar performance of individuals with WS and mental age controls. The expression recognition performance of individuals with WS did not correlate with age, but was instead found to correlate with IQ. This is compared to earlier findings, replicated here, that face recognition performance on the Benton test correlates with age and not IQ. The results of the Benton test have been explained in terms of individuals with WS being good at face recognition; since a piecemeal strategy can be used, this strategy is improved with practice which would explain the correlation with age. We propose that poor expression recognition of the individuals with WS is due to a lack of configural ability since changes in the configuration of the face are an important part of expressions. Furthermore, these reduced configural abilities may be due to abnormal neuronal development and are thus fixed from an early age. PMID:12591030

  1. Objectifying Facial Expressivity Assessment of Parkinson's Patients: Preliminary Study

    PubMed Central

    Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie

    2014-01-01

    Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as “facial masking,” a symptom in which facial muscles become rigid. To improve the clinical assessment of facial expressivity in PD, this work attempts to quantify dynamic facial expressivity (facial activity) by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To elicit spontaneous facial expressions resembling those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were induced using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participants' self-reports. Self-reported disgust was significantly stronger than the other induced emotions, so we focused the analysis on the data recorded while participants watched the disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Differences between PD patients at different stages of disease progression were also observed. PMID:25478003

  2. Facial expression recognition using constructive neural networks

    NASA Astrophysics Data System (ADS)

    Ma, Liying; Khorasani, Khashayar

    2001-08-01

    The computer-based recognition of facial expressions has been an active area of research for quite a long time. The ultimate goal is to realize intelligent and transparent communications between human beings and machines. The neural network (NN) based recognition methods have been found to be particularly promising, since NN is capable of implementing mapping from the feature space of face images to the facial expression space. However, finding a proper network size has always been a frustrating and time consuming experience for NN developers. In this paper, we propose to use the constructive one-hidden-layer feedforward neural networks (OHL-FNNs) to overcome this problem. The constructive OHL-FNN will obtain in a systematic way a proper network size which is required by the complexity of the problem being considered. Furthermore, the computational cost involved in network training can be considerably reduced when compared to standard back-propagation (BP) based FNNs. In our proposed technique, the 2-dimensional discrete cosine transform (2-D DCT) is applied over the entire difference face image for extracting relevant features for recognition purposes. The lower-frequency 2-D DCT coefficients obtained are then used to train a constructive OHL-FNN. An input-side pruning technique previously proposed by the authors is also incorporated into the constructive learning process to reduce the network size without sacrificing the performance of the resulting network. The proposed technique is applied to a database consisting of images of 60 men, each having 5 facial expression images (neutral, smile, anger, sadness, and surprise). Images of 40 men are used for network training, and the remaining images are used for generalization and
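The 2-D DCT feature-extraction step can be sketched directly from the DCT-II definition, keeping only the lowest-frequency k x k block of coefficients as the network input. A toy 2 x 2 difference image stands in for a real one, and the unnormalized form below is a simplification:

```python
import math

def dct2_lowfreq(img, k=2):
    """Unnormalized 2-D DCT-II of an image (list of rows), keeping only
    the k x k block of lowest-frequency coefficients as features."""
    n, m = len(img), len(img[0])
    feats = []
    for u in range(k):
        for v in range(k):
            s = 0.0
            for y in range(n):
                for x in range(m):
                    s += (img[y][x]
                          * math.cos(math.pi * (2 * y + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * x + 1) * v / (2 * m)))
            feats.append(s)
    return feats

img = [[1, 2], [3, 4]]  # toy difference image
f = dct2_lowfreq(img, k=2)
```

The first coefficient (u = v = 0) is just the sum of the pixels; the remaining low-frequency terms capture the coarse spatial structure that the constructive OHL-FNN is trained on.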

  3. Altering sensorimotor feedback disrupts visual discrimination of facial expressions.

    PubMed

    Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula

    2016-08-01

    Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual-and not just conceptual-processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

  4. From facial expressions to bodily gestures

    PubMed Central

    2016-01-01

    This article aims to determine to what extent photographic practices in psychology, psychiatry and physiology contributed to the definition of the external bodily signs of passions and emotions in the second half of the 19th century in France. Bridging the gap between recent research in the history of emotions and photographic history, the following analyses focus on the photographic production of scientists and photographers who made significant contributions to the study of expressions and gestures, namely Duchenne de Boulogne, Charles Darwin, Paul Richer and Albert Londe. This article argues that photography became a key technology in their works due to the adequacy of the exposure time of different cameras to the duration of the bodily manifestations to be recorded, and that these uses constituted facial expressions and bodily gestures as particular objects of scientific study. PMID:26900264

  5. Shadows Alter Facial Expressions of Noh Masks

    PubMed Central

    Kawai, Nobuyuki; Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo

    2013-01-01

    Background A Noh mask, worn by expert actors during performance on the Japanese traditional Noh drama, conveys various emotional expressions despite its fixed physical properties. How does the mask change its expressions? Shadows change subtly during the actual Noh drama, which plays a key role in creating elusive artistic enchantment. We here describe evidence from two experiments regarding how attached shadows of the Noh masks influence the observers’ recognition of the emotional expressions. Methodology/Principal Findings In Experiment 1, neutral-faced Noh masks having the attached shadows of the happy/sad masks were recognized as bearing happy/sad expressions, respectively. This was true for all four types of masks each of which represented a character differing in sex and age, even though the original characteristics of the masks also greatly influenced the evaluation of emotions. Experiment 2 further revealed that frontal Noh mask images having shadows of upward/downward tilted masks were evaluated as sad/happy, respectively. This was consistent with outcomes from preceding studies using actually tilted Noh mask images. Conclusions/Significance Results from the two experiments concur that purely manipulating attached shadows of the different types of Noh masks significantly alters the emotion recognition. These findings go in line with the mysterious facial expressions observed in Western paintings, such as the elusive qualities of Mona Lisa’s smile. They also agree with the aesthetic principle of Japanese traditional art “yugen (profound grace and subtlety)”, which highly appreciates subtle emotional expressions in the darkness. PMID:23940748

  6. Facial Expression Generation from Speaker's Emotional States in Daily Conversation

    NASA Astrophysics Data System (ADS)

    Mori, Hiroki; Ohshima, Koh

    A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former is represented by vectors with psychologically-defined abstract dimensions, and the latter is coded by the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method is verified by a subjective evaluation test. As a result, the Mean Opinion Score with respect to the suitability of generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
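The learned mapping can be pictured as a small feedforward pass from an emotional-state vector to AU intensities. The architecture and weights below are illustrative placeholders, not the trained network from the paper:

```python
import math

def emotion_to_aus(emotion, w_hidden, w_out):
    """Forward pass of a one-hidden-layer network mapping an
    emotional-state vector (e.g., pleasantness, arousal) to FACS
    action-unit intensities in [0, 1]."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    hidden = [sigmoid(sum(w * e for w, e in zip(row, emotion)))
              for row in w_hidden]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden)))
            for row in w_out]

# Hypothetical weights: 2 emotion dimensions -> 3 hidden -> 2 AU outputs
w_h = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]
w_o = [[0.9, 0.2, -0.1], [0.3, 0.7, 0.5]]
aus = emotion_to_aus([0.8, 0.6], w_h, w_o)  # e.g., intensities for AU6, AU12
```

Training would fit `w_h` and `w_o` to the parallel rated-emotion/FACS data the abstract describes; only the inference direction is sketched here.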

  7. Children's Understanding of Facial Expressions Used during Conflict Encounters.

    ERIC Educational Resources Information Center

    Camras, Linda A.

    1980-01-01

    Investigated children's understanding of facial expressions such as anger, sadness, and disgust. Further study explored children's capacity to associate components of emotional expressions with the emotions to which they are related. (Author/RH)

  8. Facial Expression Recognition in Nonvisual Imagery

    NASA Astrophysics Data System (ADS)

    Olague, Gustavo; Hammoud, Riad; Trujillo, Leonardo; Hernández, Benjamín; Romero, Eva

    This chapter presents two novel approaches that allow computer vision applications to perform human facial expression recognition (FER). From a problem standpoint, we focus on FER beyond the human visual spectrum, in long-wave infrared imagery, thus allowing us to offer illumination-independent solutions to this important human-computer interaction problem. From a methodological standpoint, we introduce two different feature extraction techniques: a principal component analysis-based approach with automatic feature selection and one based on texture information selected by an evolutionary algorithm. In the former, facial features are selected based on interest point clusters, and classification is carried out using eigenfeature information; in the latter, an evolutionary-based learning algorithm searches for optimal regions of interest and texture features based on classification accuracy. Both of these approaches use a support vector machine committee for classification. Results show effective performance for both techniques, from which we can conclude that thermal imagery contains worthwhile information for the FER problem beyond the human visual spectrum.

  9. Dielectric elastomer actuators for facial expression

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhe; Zhu, Jian

    2016-04-01

    Dielectric elastomer actuators have the advantage of mimicking the salient feature of life: movements in response to stimuli. In this paper we explore application of dielectric elastomer actuators to artificial muscles. These artificial muscles can mimic natural masseter to control jaw movements, which are key components in facial expressions especially during talking and singing activities. This paper investigates optimal design of the dielectric elastomer actuator. It is found that the actuator with embedded plastic fibers can avert electromechanical instability and can greatly improve its actuation. Two actuators are then installed in a robotic skull to drive jaw movements, mimicking the masseters in a human jaw. Experiments show that the maximum vertical displacement of the robotic jaw, driven by artificial muscles, is comparable to that of the natural human jaw during speech activities. Theoretical simulations are conducted to analyze the performance of the actuator, which is quantitatively consistent with the experimental observations.

  10. Four not six: Revealing culturally common facial expressions of emotion.

    PubMed

    Jack, Rachael E; Sun, Wei; Delis, Ioannis; Garrod, Oliver G B; Schyns, Philippe G

    2016-06-01

    As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data questions the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics. (PsycINFO Database Record) PMID:27077757

  12. Dynamic Facial Expression Recognition With Atlas Construction and Sparse Representation.

    PubMed

    Guo, Yimo; Zhao, Guoying; Pietikainen, Matti

    2016-05-01

    In this paper, a new dynamic facial expression recognition method is proposed. Dynamic facial expression recognition is formulated as a longitudinal groupwise registration problem. The main contributions of this method lie in the following aspects: 1) subject-specific facial feature movements of different expressions are described by a diffeomorphic growth model; 2) a salient longitudinal facial expression atlas is built for each expression by a sparse groupwise image registration method, which can describe the overall facial feature changes among the whole population and can suppress the bias due to large intersubject facial variations; and 3) both the image appearance information in the spatial domain and topological evolution information in the temporal domain are used to guide recognition by a sparse representation method. The proposed framework has been extensively evaluated on five databases for different applications: the extended Cohn-Kanade, MMI, FERA, and AFEW databases for dynamic facial expression recognition, and the UNBC-McMaster database for spontaneous pain expression monitoring. This framework is also compared with several state-of-the-art dynamic facial expression recognition methods. The experimental results demonstrate that the recognition rates of the new method are consistently higher than other methods under comparison. PMID:26955032

  13. The Facial Expression Coding System (FACES): Development, Validation, and Utility

    ERIC Educational Resources Information Center

    Kring, Ann M.; Sloan, Denise M.

    2007-01-01

    This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…

  14. Enhanced subliminal emotional responses to dynamic facial expressions.

    PubMed

    Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi

    2014-01-01

    Emotional processing without conscious awareness plays an important role in human social interaction. Several behavioral studies reported that subliminal presentation of photographs of emotional facial expressions induces unconscious emotional processing. However, it was difficult to elicit strong and robust effects using this method. We hypothesized that dynamic presentations of facial expressions would enhance subliminal emotional effects and tested this hypothesis with two experiments. Fearful or happy facial expressions were presented dynamically or statically in either the left or the right visual field for 20 (Experiment 1) and 30 (Experiment 2) ms. Nonsense target ideographs were then presented, and participants reported their preference for them. The results consistently showed that dynamic presentations of emotional facial expressions induced more evident emotional biases toward subsequent targets than did static ones. These results indicate that dynamic presentations of emotional facial expressions induce more evident unconscious emotional processing. PMID:25250001

  15. Robust facial expression recognition algorithm based on local metric learning

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Jia, Kebin

    2016-01-01

    In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.
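    The three steps above (k nearest neighbors, same-label chunklets, a transform that maximizes between-chunklet variance while minimizing within-chunklet variance) resemble a locally computed Fisher-style transform. A hedged sketch of that idea, not the paper's exact objective; `k` and the regularizer `reg` are illustrative:

```python
import numpy as np

def local_metric_predict(train_X, train_y, test_x, k=6, reg=1e-3):
    """Local metric learning sketch: take the test sample's k nearest
    training neighbours, group them into same-label chunklets, and learn
    an LDA-style transform that spreads chunklet means apart while
    shrinking within-chunklet scatter; classify by 1-NN in that space."""
    X = np.asarray(train_X, float)
    y = np.asarray(train_y)
    t = np.asarray(test_x, float)
    idx = np.argsort(np.linalg.norm(X - t, axis=1))[:k]   # k nearest neighbours
    Xk, yk = X[idx], y[idx]
    mu = Xk.mean(axis=0)
    dim = X.shape[1]
    Sw = reg * np.eye(dim)                   # within-chunklet scatter (regularised)
    Sb = np.zeros((dim, dim))                # between-chunklet scatter
    for c in np.unique(yk):
        Xc = Xk[yk == c]                     # one chunklet per label
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    W = evecs.real[:, np.argsort(evals.real)[::-1]]       # transformation matrix
    dists = np.linalg.norm(Xk @ W - t @ W, axis=1)
    return yk[np.argmin(dists)]              # 1-NN in the learned metric
```

    Because the transform is recomputed around every test sample, each sample gets its own distance metric, which is the property the abstract emphasizes.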

  17. The not face: A grammaticalization of facial expressions of emotion.

    PubMed

    Benitez-Quiroz, C Fabian; Wilbur, Ronnie B; Martinez, Aleix M

    2016-05-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers.

  19. Electrophysiological correlates of the efficient detection of emotional facial expressions.

    PubMed

    Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Toichi, Motomi

    2014-04-29

    Behavioral studies have shown that emotional facial expressions are detected more rapidly and accurately than are neutral expressions. However, the neural mechanism underlying this efficient detection has remained unclear. To investigate this mechanism, we measured event-related potentials (ERPs) during a visual search task in which participants detected the normal emotional facial expressions of anger and happiness or their control stimuli, termed "anti-expressions," within crowds of neutral expressions. The anti-expressions, which were created using a morphing technique that produced changes equivalent to those in the normal emotional facial expressions compared with the neutral facial expressions, were most frequently recognized as emotionally neutral. Behaviorally, normal expressions were detected faster and more accurately and were rated as more emotionally arousing than were the anti-expressions. Regarding ERPs, the normal expressions elicited larger early posterior negativity (EPN) at 200-400ms compared with anti-expressions. Furthermore, larger EPN was related to faster and more accurate detection and higher emotional arousal. These data suggest that the efficient detection of emotional facial expressions is implemented via enhanced activation of the posterior visual cortices at 200-400ms based on their emotional significance. PMID:24594020

  20. Meta-Analysis of the First Facial Expression Recognition Challenge.

    PubMed

    Valstar, M F; Mehu, M; Bihan Jiang; Pantic, M; Scherer, K

    2012-08-01

    Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability have received some attention; for instance, there exist a number of commonly used facial expression databases. However, lack of a commonly accepted evaluation protocol and, typically, lack of sufficient details needed to reproduce the reported individual results make it difficult to compare systems. This, in turn, hinders the progress of the field. A periodical challenge in facial expression recognition would allow such a comparison on a level playing field. It would provide insight into how far the field has come and would allow researchers to identify new goals, challenges, and targets. This paper presents a meta-analysis of the first such challenge in automatic recognition of facial expressions, held during the IEEE conference on Face and Gesture Recognition 2011. It details the challenge data, evaluation protocol, and the results attained in two subchallenges: AU detection and classification of facial expression imagery in terms of a number of discrete emotion categories. We also summarize the lessons learned and reflect on the future of the field of facial expression recognition in general and on possible future challenges in particular.

  1. Recognition, Expression, and Understanding Facial Expressions of Emotion in Adolescents with Nonverbal and General Learning Disabilities

    ERIC Educational Resources Information Center

    Bloom, Elana; Heath, Nancy

    2010-01-01

    Children with nonverbal learning disabilities (NVLD) have been found to be worse at recognizing facial expressions than children with verbal learning disabilities (LD) and without LD. However, little research has been done with adolescents. In addition, expressing and understanding facial expressions is yet to be studied among adolescents with LD…

  2. Viewing distance matter to perceived intensity of facial expressions

    PubMed Central

    Gerhardsson, Andreas; Högman, Lennart; Fischer, Håkan

    2015-01-01

    In our daily perception of facial expressions, we depend on an ability to generalize across the varied distances at which they may appear. This is important to how we interpret the quality and the intensity of the expression. Previous research has not investigated whether this so-called perceptual constancy also applies to the experienced intensity of facial expressions. Using a psychophysical measure (Borg CR100 scale) the present study aimed to further investigate perceptual constancy of happy and angry facial expressions at varied sizes, which is a proxy for varying viewing distances. Seventy-one (42 females) participants rated the intensity and valence of facial expressions varying in distance and intensity. The results demonstrated that the perceived intensity (PI) of the emotional facial expression was dependent on the distance of the face and the person perceiving it. An interaction effect was noted, indicating that close-up faces are perceived as more intense than faces at a distance and that this effect is stronger the more intense the facial expression truly is. The present study raises considerations regarding constancy of the PI of happy and angry facial expressions at varied distances. PMID:26191035

  4. Parameterized Facial Expression Synthesis Based on MPEG-4

    NASA Astrophysics Data System (ADS)

    Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos

    2002-12-01

    In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become accustomed to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human computer interaction, focusing on analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions, compatible with the MPEG-4 standard.

  5. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    ERIC Educational Resources Information Center

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  6. Automatic facial expression recognition based on features extracted from tracking of facial landmarks

    NASA Astrophysics Data System (ADS)

    Ghimire, Deepak; Lee, Joonwhoan

    2014-01-01

    In this paper, we present a fully automatic facial expression recognition system using support vector machines, with geometric features extracted from the tracking of facial landmarks. Facial landmark initialization and tracking are performed by using an elastic bunch graph matching algorithm. The facial expression recognition is performed based on the features extracted from the tracking of not only individual landmarks, but also pairs of landmarks. The recognition accuracy on the Extended Cohn-Kanade (CK+) database shows that our proposed set of features produces better results, because it utilizes time-varying graph information, as well as the motion of individual facial landmarks.
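    The feature recipe above (motion of individual landmarks plus pairs of landmarks, fed to an SVM) can be sketched roughly as follows. The three-landmark toy data and class labels are invented for illustration; the real system tracks many landmarks with elastic bunch graph matching rather than using synthetic coordinates.

```python
import numpy as np
from sklearn.svm import SVC

def geometric_features(seq):
    """Features from a landmark sequence of shape (frames, points, 2):
    per-landmark displacement from the first (neutral) frame, plus the
    change in every pairwise landmark distance."""
    seq = np.asarray(seq, float)
    disp = (seq[-1] - seq[0]).ravel()                    # individual landmark motion
    def pair_dists(frame):
        diff = frame[:, None, :] - frame[None, :, :]
        return np.linalg.norm(diff, axis=-1)[np.triu_indices(len(frame), 1)]
    pair = pair_dists(seq[-1]) - pair_dists(seq[0])      # pair-of-landmark changes
    return np.concatenate([disp, pair])

# Toy training data: three landmarks (two mouth corners and a nose tip),
# with "smile" spreading the corners apart and "frown" pulling them together.
rng = np.random.default_rng(0)
neutral = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
smile = np.array([[-1.5, 0.2], [1.5, 0.2], [0.0, 1.0]])
frown = np.array([[-0.7, -0.2], [0.7, -0.2], [0.0, 1.0]])

def make_seq(final):                                     # two-frame jittered sequence
    return np.stack([neutral, final + 0.02 * rng.standard_normal(final.shape)])

X = [geometric_features(make_seq(f)) for f in (smile, smile, frown, frown)]
labels = ["smile", "smile", "frown", "frown"]
clf = SVC(kernel="linear").fit(X, labels)
```

    A real system would use many frames per sequence and far more landmark pairs, but the shape of the feature vector and the SVM training step are the same.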

  7. How Facial Expressions of Emotion Affect Distance Perception

    PubMed Central

    Kim, Nam-Gyoon; Son, Heejung

    2015-01-01

    Facial expressions of emotion are thought to convey expressers’ behavioral intentions, thus priming observers’ approach and avoidance tendencies appropriately. The present study examined whether detecting expressions of behavioral intent influences perceivers’ estimation of the expresser’s distance from them. Eighteen undergraduates (nine male and nine female) participated in the study. Six facial expressions were chosen on the basis of degree of threat—anger, hate (threatening expressions), shame, surprise (neutral expressions), pleasure, and joy (safe expressions). Each facial expression was presented on a tablet PC held by an assistant covered by a black drape who stood 1, 2, or 3 m away from participants. Participants performed a visual matching task to report the perceived distance. Results showed that facial expression influenced distance estimation, with faces exhibiting threatening or safe expressions judged closer than those showing neutral expressions. Females’ judgments were more likely to be influenced; but these influences largely disappeared beyond the 2 m distance. These results suggest that facial expressions of emotion (particularly threatening or safe emotions) influence others’ (especially females’) distance estimations but only within close proximity. PMID:26635708

  8. Violent media consumption and the recognition of dynamic facial expressions.

    PubMed

    Kirsh, Steven J; Mounts, Jeffrey R W; Olczak, Paul V

    2006-05-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions were morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph. Results indicated that, independent of trait aggressiveness, participants high in violent media consumption responded slower to depictions of happiness and faster to depictions of anger than participants low in violent media consumption. Implications of these findings are discussed with respect to current models of aggressive behavior.

  9. Macaques can predict social outcomes from facial expressions.

    PubMed

    Waller, Bridget M; Whitehouse, Jamie; Micheletta, Jérôme

    2016-09-01

    There is widespread acceptance that facial expressions are useful in social interactions, but empirical demonstration of their adaptive function has remained elusive. Here, we investigated whether macaques can use the facial expressions of others to predict the future outcomes of social interaction. Crested macaques (Macaca nigra) were shown an approach between two unknown individuals on a touchscreen and were required to choose between one of two potential social outcomes. The facial expressions of the actors were manipulated in the last frame of the video. One subject reached the experimental stage and accurately predicted different social outcomes depending on which facial expressions the actors displayed. The bared-teeth display (homologue of the human smile) was most strongly associated with predicted friendly outcomes. Contrary to our predictions, screams and threat faces were not associated more with conflict outcomes. Overall, therefore, the presence of any facial expression (compared to neutral) caused the subject to choose friendly outcomes more than negative outcomes. Facial expression in general, therefore, indicated a reduced likelihood of social conflict. The findings dispute traditional theories that view expressions only as indicators of present emotion and instead suggest that expressions form part of complex social interactions where individuals think beyond the present.

  11. Automated Facial Action Coding System for dynamic analysis of facial expressions in neuropsychiatric disorders.

    PubMed

    Hamm, Jihun; Kohler, Christian G; Gur, Ruben C; Verma, Ragini

    2011-09-15

    Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen's Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and therefore is potentially more suitable for analyzing the small differences in facial affect. However, FACS rating requires extensive training, and is time-consuming, subjective, and thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer science technology. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined Action Units (AUs) in videos, and they can facilitate a statistical study of large populations in disorders known to impact facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from temporal AU profiles. Applicability of the automated FACS was illustrated in a pilot study by applying it to videos of eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between patients and the controls, highlighting their potential in automatic and objective quantification of symptom severity.
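    The reduction from temporal AU profiles to AU frequencies might look roughly like this. This is a sketch only: the activation threshold and minimum run length are illustrative parameters, not values from the paper.

```python
import numpy as np

def au_frequencies(profiles, threshold=0.5, min_frames=3):
    """Reduce per-frame AU intensity profiles (frames x AUs) to event
    counts: a run of at least `min_frames` consecutive frames above
    `threshold` counts as one occurrence of that AU."""
    active = np.asarray(profiles, float) > threshold
    counts = []
    for column in active.T:                  # one column per AU
        runs, length = 0, 0
        for on in column:
            length = length + 1 if on else 0
            if length == min_frames:         # run just became long enough
                runs += 1
        counts.append(runs)
    return np.array(counts)
```

    Frequencies computed this way over many videos are what would feed the kind of group-level statistics the abstract describes.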

  12. Automatic decoding of facial movements reveals deceptive pain expressions

    PubMed Central

    Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang

    2014-01-01

    In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance, and after training, improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling. PMID:24656830

  13. Facial expression categorization by chimpanzees using standardized stimuli.

    PubMed

    Parr, Lisa A; Waller, Bridget M; Heintz, Matthew

    2008-04-01

    The ability to recognize and accurately interpret facial expressions is a critical social cognition skill in primates, yet very few studies have examined how primates discriminate these social signals and which features are the most salient. Four experiments examined chimpanzee facial expression processing using a set of standardized, prototypical stimuli created using the new ChimpFACS coding system. First, chimpanzees were found to accurately discriminate between these expressions using a computerized matching-to-sample task, and recognition was impaired for all but one expression category when they were inverted. Next, a multidimensional scaling analysis examined the perceived dissimilarity among these facial expressions, revealing two main dimensions: the degree of mouth closure and the extent of lip-puckering and retraction. Finally, subjects were asked to match each facial expression category using only individual component features. For each expression category, at least one component movement was more salient or representative of that expression than the others. However, these were not necessarily the only movements implicated in subjects' overall pattern of errors. Therefore, similar to humans, both configuration and component movements are important during chimpanzee facial expression processing. PMID:18410196

  14. Discrimination of gender using facial image with expression change

    NASA Astrophysics Data System (ADS)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

    By carrying out marketing research, managers of large department stores and small convenience stores can obtain information such as the male-to-female ratio and age distribution of visitors, and use it to improve their management plans. However, this work is carried out manually and places a heavy burden on small stores. In this paper, the authors propose a method for discriminating gender by extracting differences in facial expression change from color facial images. Many methods for automatic recognition of individuals from moving or still facial images already exist in the field of image processing. However, it is very difficult to discriminate gender under the influence of hairstyle, clothing, and similar factors. We therefore propose a method that is unaffected by personal characteristics such as the size and position of facial parts, by paying attention to changes in expression. The method requires two facial images: one with an expression and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system together with emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part caused by the expression change. Finally, these feature values are compared between the input data and a database, and the gender is discriminated. Experiments were conducted on laughing and smiling expressions, and good results were obtained for discriminating gender.
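    The per-part change-rate features in this method can be sketched as below. The part names and box dimensions are invented for illustration; the actual region extraction uses hue/saturation and edge information as described in the abstract.

```python
def change_rates(neutral_parts, expressive_parts):
    """Relative change of each facial part between the expressionless
    and expressive images, given per-part (width, height) measurements."""
    rates = {}
    for name, (w0, h0) in neutral_parts.items():
        w1, h1 = expressive_parts[name]
        rates[name] = {"width": (w1 - w0) / w0,    # relative widening/narrowing
                       "height": (h1 - h0) / h0}   # relative opening/closing
    return rates
```

    Because each rate is relative to the person's own neutral face, the features are independent of the absolute size and position of the facial parts, which is the invariance the method relies on.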

  15. Does the Organization of Emotional Expression Change over Time? Facial Expressivity from 4 to 12 Months

    ERIC Educational Resources Information Center

    Bennett, David S.; Bendersky, Margaret; Lewis, Michael

    2005-01-01

    Differentiation models contend that the organization of facial expressivity increases during infancy. Accordingly, infants are believed to exhibit increasingly specific facial expressions in response to stimuli as a function of development. This study tested this hypothesis in a sample of 151 infants (83 boys and 68 girls) observed in 4 situations…

  16. Rapid amygdala gamma oscillations in response to fearful facial expressions.

    PubMed

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota; Matsuda, Kazumi; Usui, Keiko; Inoue, Yushi; Toichi, Motomi

    2011-03-01

    Neuroimaging studies have reported greater activation of the human amygdala in response to emotional facial expressions, especially for fear. However, little is known about how fast this activation occurs. We investigated this issue by recording the intracranial field potentials of the amygdala in subjects undergoing pre-neurosurgical assessment (n=6). The subjects observed fearful, happy, and neutral facial expressions. Time-frequency statistical parametric mapping analyses revealed that the amygdala showed greater gamma-band activity in response to fearful compared with neutral facial expressions at 50-150 ms, with a peak at 135 ms. These results indicate that the human amygdala is able to rapidly process fearful facial expressions. PMID:21182851

  17. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    PubMed

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-01

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently.

  18. Cognitive penetrability and emotion recognition in human facial expressions

    PubMed Central

    Marchi, Francesco; Newen, Albert

    2015-01-01

    Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion. PMID:26150796

  20. Allometry of facial mobility in anthropoid primates: implications for the evolution of facial expression.

    PubMed

    Dobson, Seth D

    2009-01-01

    Body size may be an important factor influencing the evolution of facial expression in anthropoid primates due to allometric constraints on the perception of facial movements. Given this hypothesis, I tested the prediction that observed facial mobility is positively correlated with body size in a comparative sample of nonhuman anthropoids. Facial mobility, or the variety of facial movements a species can produce, was estimated using a novel application of the Facial Action Coding System (FACS). I used FACS to estimate facial mobility in 12 nonhuman anthropoid species, based on video recordings of facial activity in zoo animals. Body mass data were taken from the literature. I used phylogenetic generalized least squares (PGLS) to perform a multiple regression analysis with facial mobility as the dependent variable and two independent variables: log body mass and dummy-coded infraorder. Together, body mass and infraorder explain 92% of the variance in facial mobility. However, the partial effect of body mass is much stronger than for infraorder. The results of my study suggest that allometry is an important constraint on the evolution of facial mobility, which may limit the complexity of facial expression in smaller species. More work is needed to clarify the perceptual bases of this allometric pattern.
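    The regression described above (facial mobility on log body mass plus a dummy-coded infraorder term) can be illustrated with ordinary least squares in a minimal sketch; a true PGLS fit would additionally weight the errors by a phylogenetic covariance matrix. All species data and coefficient values below are invented for illustration.

```python
import numpy as np

# Toy data for 12 hypothetical species: log body mass, a 0/1 dummy for
# infraorder (e.g., platyrrhine = 0, catarrhine = 1), and a mobility score
# generated from known coefficients. All values are invented.
rng = np.random.default_rng(0)
log_mass = rng.uniform(0.0, 2.0, size=12)
infraorder = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
mobility = 3.0 + 4.0 * log_mass + 1.0 * infraorder + rng.normal(0, 0.3, 12)

# Design matrix: intercept, log body mass, infraorder dummy.
X = np.column_stack([np.ones(12), log_mass, infraorder])
beta, *_ = np.linalg.lstsq(X, mobility, rcond=None)

# R^2: share of variance in mobility explained jointly by both predictors.
resid = mobility - X @ beta
r2 = 1.0 - resid.var() / mobility.var()
print(beta, round(r2, 3))
```

    A full PGLS analysis replaces the implicit identity error covariance here with one derived from the phylogenetic tree (e.g., under Brownian-motion expectations), which is what allows the partial effects of body mass and infraorder to be compared across related species.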

  1. Facial expression recognition in rhesus monkeys, Macaca mulatta.

    PubMed

    Parr, Lisa A; Heintz, Matthew

    2009-06-01

    The ability to recognize and accurately interpret facial expressions is critically important for nonhuman primates that rely on these nonverbal signals for social communication. Despite this, little is known about how nonhuman primates, particularly monkeys, discriminate between facial expressions. In the present study, seven rhesus monkeys were required to discriminate four categories of conspecific facial expressions using a matching-to-sample task. In experiment 1, the matching pair showed identical photographs of facial expressions, paired with every other expression type as the nonmatch. The identity of the nonmatching stimulus monkey differed from the one in the sample. Subjects performed above chance on session 1, with no difference in performance across the four expression types. In experiment 2, the identity of all three monkeys differed in each trial, and a neutral portrait was also included as the nonmatching stimulus. Monkeys discriminated expressions across individual identity when the non-match was a neutral stimulus, but they had difficulty when the nonmatch was another expression type. We analysed the degree to which specific feature redundancy could account for these error patterns using a multidimensional scaling analysis which plotted the perceived dissimilarity between expression dyads along a two-dimensional axis. One axis appeared to represent mouth shape, stretched open versus funnelled, while the other appeared to represent a combination of lip retraction and mouth opening. These features alone, however, could not account for overall performance and suggest that monkeys do not rely solely on distinctive features to discriminate among different expressions. PMID:20228886
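    The two-dimensional scaling of perceived dissimilarities between expression dyads can be illustrated with classical (Torgerson) multidimensional scaling. This is a generic sketch, not the authors' analysis; the dissimilarity matrix and the four expression categories it stands for are invented.

```python
import numpy as np

# Invented pairwise dissimilarities between four expression categories;
# symmetric with a zero diagonal, as required for MDS.
D = np.array([[0.0, 2.0, 4.0, 3.0],
              [2.0, 0.0, 3.5, 3.0],
              [4.0, 3.5, 0.0, 1.5],
              [3.0, 3.0, 1.5, 0.0]])

# Classical MDS: double-center the squared dissimilarities, then take the
# top eigenvectors scaled by the square root of their eigenvalues.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
vals, vecs = np.linalg.eigh(B)               # eigenvalues in ascending order
idx = np.argsort(vals)[::-1][:2]             # indices of the two largest
coords = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
print(coords)                                # four points in the 2-D plane
```

    Each recovered axis can then be inspected for an interpretable facial feature, such as mouth shape, by examining how the expression categories order along it.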

  2. Learning Multiscale Active Facial Patches for Expression Analysis.

    PubMed

    Zhong, Lin; Liu, Qingshan; Yang, Peng; Huang, Junzhou; Metaxas, Dimitris N

    2015-08-01

    In this paper, we present a new idea to analyze facial expression by exploring common and specific information among different expressions. Inspired by the observation that only a few facial parts are active in expression disclosure (e.g., around the mouth and eyes), we try to discover the common and specific patches which are important for discriminating all the expressions and only a particular expression, respectively. A two-stage multitask sparse learning (MTSL) framework is proposed to efficiently locate those discriminative patches. In the first stage of MTSL, expression recognition tasks, each aiming to find the dominant patches for one expression, are combined to locate common patches. Secondly, two related tasks, facial expression recognition and face verification, are coupled to learn specific facial patches for individual expressions. The two-stage patch learning is performed on patches sampled by a multiscale strategy. Extensive experiments validate the existence and significance of common and specific patches. Utilizing these learned patches, we achieve superior performance on expression recognition compared to the state of the art. PMID:25291808
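    As a much-simplified stand-in for the two-stage multitask sparse learning above, ranking patches by a Fisher-style discriminability score illustrates the idea of finding "common" patches that separate expression classes. The patch layout, class labels, and scoring function are all invented for illustration and are not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic setup: 100 faces, 16 candidate patches with one pooled feature
# each, and two expression classes. Patches 3 and 7 are made genuinely
# discriminative; all other patches are pure noise.
labels = rng.integers(0, 2, size=100)
feats = rng.normal(0.0, 1.0, size=(100, 16))
feats[:, 3] += 2.0 * labels
feats[:, 7] -= 2.0 * labels

def fisher_scores(X, y):
    """Between-class separation over within-class spread, per patch."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    v0, v1 = X[y == 0].var(axis=0), X[y == 1].var(axis=0)
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-9)

scores = fisher_scores(feats, labels)
common_patches = np.argsort(scores)[::-1][:2]   # two most discriminative
print(sorted(common_patches.tolist()))
```

    The MTSL framework goes further by coupling per-expression tasks through a shared sparsity pattern, so that patch selection is driven jointly rather than patch-by-patch as in this toy scoring.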

  3. Structure-preserving sparse decomposition for facial expression analysis.

    PubMed

    Taheri, Sima; Qiu, Qiang; Chellappa, Rama

    2014-08-01

    Although facial expressions can be decomposed in terms of action units (AUs) as suggested by the facial action coding system, there have been only a few attempts that recognize expression using AUs and their composition rules. In this paper, we propose a dictionary-based approach for facial expression analysis by decomposing expressions in terms of AUs. First, we construct an AU-dictionary using domain experts' knowledge of AUs. To incorporate the high-level knowledge regarding expression decomposition and AUs, we then perform structure-preserving sparse coding by imposing two layers of grouping over AU-dictionary atoms as well as over the test image matrix columns. We use the computed sparse code matrix for each expressive face to perform expression decomposition and recognition. Since domain experts' knowledge may not always be available for constructing an AU-dictionary, we also propose a structure-preserving dictionary learning algorithm, which we use to learn a structured dictionary as well as divide expressive faces into several semantic regions. Experimental results on publicly available expression data sets demonstrate the effectiveness of the proposed approach for facial expression analysis.
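    The sparse coding step can be sketched with plain l1-regularized coding via ISTA (iterative soft-thresholding); the structure-preserving grouping described in the abstract would add group-level penalties on top of this. The dictionary and signal below are synthetic stand-ins for AU-dictionary atoms and an expressive face.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic dictionary: 20 unit-norm atoms in R^10 (columns), standing in
# for AU-dictionary atoms learned from domain knowledge.
D = rng.normal(size=(10, 20))
D /= np.linalg.norm(D, axis=0)

# A test signal that is truly a combination of two atoms.
x_true = np.zeros(20)
x_true[[4, 11]] = [1.0, -0.8]
y = D @ x_true

def ista(D, y, lam=0.05, n_iter=500):
    """ISTA for min_x 0.5 * ||y - D x||^2 + lam * ||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = x + (D.T @ (y - D @ x)) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

x_hat = ista(D, y)
support = np.flatnonzero(np.abs(x_hat) > 0.1)
print(support)                              # recovered active atoms
```

    The nonzero entries of the code identify which atoms (AUs, in the paper's setting) compose the expression; accelerated variants such as FISTA converge faster but follow the same shrinkage logic.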

  4. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    ERIC Educational Resources Information Center

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  5. Expressive facial animation synthesis by learning speech coarticulation and expression spaces.

    PubMed

    Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth

    2006-01-01

    Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
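    The PCA reduction used to construct the expression eigenspace can be sketched as an SVD on centered motion signals. The data below is synthetic, standing in for time-warped, speech-subtracted marker trajectories, and the 95% variance threshold is an assumed choice rather than a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "expression signals": 50 frames x 90 marker coordinates, built
# from 5 latent components plus a little noise, standing in for the
# time-warped, speech-subtracted motion-capture data.
frames = rng.normal(size=(50, 5)) @ rng.normal(size=(5, 90))
frames += 0.01 * rng.normal(size=(50, 90))

mean = frames.mean(axis=0)
centered = frames - mean

# PCA via SVD; keep enough components to cover 95% of the variance.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
var_ratio = s ** 2 / (s ** 2).sum()
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1
eigenspace = Vt[:k]                      # reduced expression basis

# Project one frame into the eigenspace and reconstruct it.
coeffs = (frames[0] - mean) @ eigenspace.T
recon = mean + coeffs @ eigenspace
err = np.linalg.norm(recon - frames[0]) / np.linalg.norm(frames[0] - mean)
print(k, round(err, 3))
```

    New expression signals can then be synthesized by sampling or interpolating coefficient vectors in this low-dimensional space and mapping them back through the basis, which is the role the PIEES plays in the full system.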

  6. Automatic Facial Expression Recognition and Operator Functional State

    NASA Technical Reports Server (NTRS)

    Blanson, Nina

    2011-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.

  7. Automatic Facial Expression Recognition and Operator Functional State

    NASA Technical Reports Server (NTRS)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.

  8. Improved categorization of subtle facial expressions modulates Late Positive Potential.

    PubMed

    Pollux, P M J

    2016-05-13

    Biases in facial expression recognition can be reduced successfully using feedback-based training tasks. Here we investigate with event-related potentials (ERPs) at which stages of stimulus processing emotion-related modulations are influenced by training. Categorization of subtle facial expressions (morphed from neutral to happy, sad or surprise) was trained with correct-response feedback on each trial. ERPs were recorded before and after training while participants categorized facial expressions without response feedback. Behavioral data demonstrated large improvements in categorization of subtle facial expression which transferred to new face models not used during training. ERPs were modulated by training from 450 ms post-stimulus onward, characterized by a more gradual increase in P3b/Late Positive Potential (LPP) amplitude as expression intensity increased. This effect was indistinguishable for faces used for training and for new faces. It was proposed that training elicited a more fine-grained analysis of facial information for all subtle expressions, resulting in improved recognition and enhanced emotional motivational salience (reflected in P3b/LPP amplitude) of faces previously categorized as expressing no emotion. PMID:26912280

  10. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    PubMed

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits

  11. Training facial expression production in children on the autism spectrum.

    PubMed

    Gordon, Iris; Pierce, Matthew D; Bartlett, Marian S; Tanaka, James W

    2014-10-01

    Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer recognition system that analyzes the child's facial expression in real time. Before and after playing the Angry and Happy versions of FaceMaze, children posed "happy" and "angry" expressions. Naïve raters judged the post-FaceMaze "happy" and "angry" expressions of the ASD group as higher in quality than their pre-FaceMaze productions. Moreover, the post-game expressions of the ASD group were rated as equal in quality as the expressions of the TD group. PMID:24777287

  12. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-01-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…

  13. Morphing between expressions dissociates continuous from categorical representations of facial expression in the human brain

    PubMed Central

    Harris, Richard J.; Young, Andrew W.; Andrews, Timothy J.

    2012-01-01

    Whether the brain represents facial expressions as perceptual continua or as emotion categories remains controversial. Here, we measured the neural response to morphed images to directly address how facial expressions of emotion are represented in the brain. We found that face-selective regions in the posterior superior temporal sulcus and the amygdala responded selectively to changes in facial expression, independent of changes in identity. We then asked whether the responses in these regions reflected categorical or continuous neural representations of facial expression. Participants viewed images from continua generated by morphing between faces posing different expressions such that the expression could be the same, could involve a physical change but convey the same emotion, or could differ by the same physical amount but be perceived as two different emotions. We found that the posterior superior temporal sulcus was equally sensitive to all changes in facial expression, consistent with a continuous representation. In contrast, the amygdala was only sensitive to changes in expression that altered the perceived emotion, demonstrating a more categorical representation. These results offer a resolution to the controversy about how facial expression is processed in the brain by showing that both continuous and categorical representations underlie our ability to extract this important social cue. PMID:23213218
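    The morph continua described (equal physical steps that may or may not cross a perceived category boundary) can be illustrated with the simplest linear cross-dissolve between two expression images; real face morphing also warps geometry via landmark correspondences. The arrays below are random stand-ins for aligned photographs.

```python
import numpy as np

# Stand-ins for two aligned grayscale expression images (e.g., the two
# endpoints of a fear-surprise continuum), pixel values in [0, 1].
rng = np.random.default_rng(4)
face_a = rng.random((64, 64))
face_b = rng.random((64, 64))

def morph(a, b, alpha):
    """Linear morph: alpha = 0 gives a, alpha = 1 gives b."""
    return (1.0 - alpha) * a + alpha * b

# A five-step continuum with equal physical spacing between neighbors.
continuum = [morph(face_a, face_b, t) for t in np.linspace(0.0, 1.0, 5)]

# The physical change between adjacent morphs is constant along the
# continuum, even though the perceived category may flip at only one step.
steps = [np.abs(continuum[i + 1] - continuum[i]).mean() for i in range(4)]
print([round(s, 4) for s in steps])
```

    Holding the physical step size constant while the perceived emotion changes at only one point along the continuum is what lets the design dissociate continuous from categorical neural responses.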

  14. Training Facial Expression Production in Children on the Autism Spectrum

    ERIC Educational Resources Information Center

    Gordon, Iris; Pierce, Matthew D.; Bartlett, Marian S.; Tanaka, James W.

    2014-01-01

    Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer…

  15. Detecting deception in facial expressions of pain: accuracy and training.

    PubMed

    Hill, Marilyn L; Craig, Kenneth D

    2004-01-01

    Clinicians tend to assign greater weight to nonverbal expression than to patient self-report when judging the location and severity of pain. However, patients can be successful at dissimulating facial expressions of pain, as posed expressions resemble genuine expressions in the frequency and intensity of pain-related facial actions. The present research examined individual differences in the ability to discriminate genuine and deceptive facial pain displays and whether different models of training in cues to deception would improve detection skills. Judges (60 male, 60 female) were randomly assigned to 1 of 4 experimental groups: 1) control; 2) corrective feedback; 3) deception training; and 4) deception training plus feedback. Judges were shown 4 videotaped facial expressions for each chronic pain patient: neutral expressions, genuine pain instigated by physiotherapy range of motion assessment, masked pain, and faked pain. For each condition, the participants rated pain intensity and unpleasantness, decided which category each of the 4 video clips represented, and described cues they used to arrive at decisions. There were significant individual differences in accuracy, with females more accurate than males, but accuracy was unrelated to past pain experience, empathy, or the number or type of facial cues used. Immediate corrective feedback led to significant improvements in participants' detection accuracy, whereas there was no support for the use of an information-based training program. PMID:15502685

  16. Distinct temporal processing of task-irrelevant emotional facial expressions.

    PubMed

    de Jong, Peter J; Koster, Ernst H W; Wessel, Ineke; Martens, Sander

    2014-02-01

    There is an ongoing debate concerning the extent to which emotional faces automatically attract attention. Using a single-target Rapid Serial Visual Presentation (RSVP) methodology, it has been found that presentation of task-irrelevant positive or negative emotionally salient stimuli (e.g., negative scenes or erotic pictures) results in a temporary inability to process target stimuli (emotion-induced blindness). In the present study, we sought to examine emotion-induced blindness effects for negative (angry) and positive (happy) facial expressions. Interestingly, task-irrelevant emotional facial expressions facilitated, rather than impaired, target detection when presented in close temporal proximity of the target. Similar facilitation effects were absent for neutral faces or rotated neutral faces that were both included as control stimuli. These results indicate a distinct temporal processing of emotional facial expressions, which accords well with the signal value of emotional expressions in interpersonal situations. PMID:24188063

  17. Fast and Accurate Digital Morphometry of Facial Expressions.

    PubMed

    Grewe, Carl Martin; Schreiber, Lisa; Zachow, Stefan

    2015-10-01

    Facial surgery deals with a part of the human body that is of particular importance in everyday social interactions. The perception of a person's natural, emotional, and social appearance is significantly influenced by one's expression. This is why facial dynamics has been increasingly studied by both artists and scholars since the mid-Renaissance. Currently, facial dynamics and their importance in the perception of a patient's identity play a fundamental role in planning facial surgery. Assistance is needed for patient information and communication, and documentation and evaluation of the treatment as well as during the surgical procedure. Here, the quantitative assessment of morphological features has been facilitated by the emergence of diverse digital imaging modalities in the last decades. Unfortunately, the manual data preparation usually needed for further quantitative analysis of the digitized head models (surface registration, landmark annotation) is time-consuming, and thus inhibits its use for treatment planning and communication. In this article, we refer to historical studies on facial dynamics, briefly present related work from the field of facial surgery, and draw implications for further developments in this context. A prototypical stereophotogrammetric system for high-quality assessment of patient-specific 3D dynamic morphology is described. An individual statistical model of several facial expressions is computed, and possibilities to address a broad range of clinical questions in facial surgery are demonstrated.

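    Extracting a steady-state response at the 3 Hz stimulation frequency can be sketched with an FFT on a simulated channel; the sampling rate, epoch length, and amplitudes below are assumptions for illustration, not values from the study.

```python
import numpy as np

fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)        # one 10 s epoch

rng = np.random.default_rng(5)
# Simulated channel: a 3 Hz steady-state response of amplitude 2 plus noise.
eeg = 2.0 * np.sin(2 * np.pi * 3.0 * t) + rng.normal(0.0, 1.0, t.size)

# One-sided amplitude spectrum, scaled so a pure sine of amplitude A
# produces a peak of height A at its frequency bin.
spectrum = np.abs(np.fft.rfft(eeg)) * 2.0 / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
amp_3hz = spectrum[np.argmin(np.abs(freqs - 3.0))]
print(round(amp_3hz, 2))                  # close to the simulated amplitude of 2
```

    Because the 10 s epoch makes the frequency bins fall on 0.1 Hz steps, the 3 Hz stimulation frequency lands exactly on a bin, so no windowing correction is needed; adaptation would then appear as a reduction of this amplitude for repeated identities.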
  19. Emotional facial expressions reduce neural adaptation to face identity.

    PubMed

    Gerlicher, Anna M V; van Loon, Anouk M; Scholte, H Steven; Lamme, Victor A F; van der Leij, Andries R

    2014-05-01

    In human social interactions, facial emotional expressions are a crucial source of information. Repeatedly presented information typically leads to an adaptation of neural responses. However, processing seems sustained with emotional facial expressions. Therefore, we tested whether sustained processing of emotional expressions, especially threat-related expressions, would attenuate neural adaptation. Neutral and emotional expressions (happy, mixed and fearful) of same and different identity were presented at 3 Hz. We used electroencephalography to record the evoked steady-state visual potentials (ssVEP) and tested to what extent the ssVEP amplitude adapts to the same when compared with different face identities. We found adaptation to the identity of a neutral face. However, for emotional faces, adaptation was reduced, decreasing linearly with negative valence, with the least adaptation to fearful expressions. This short and straightforward method may prove to be a valuable new tool in the study of emotional processing. PMID:23512931

  20. Extreme Facial Expressions Classification Based on Reality Parameters

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Rad, Abdolvahab Ehsani; Rehman, Amjad; Altameem, Ayman

    2014-09-01

    Extreme expressions are emotional expressions stimulated by strong emotion; an example is an expression accompanied by tears. To provide these types of features, additional elements such as a fluid mechanism (particle system) and physics techniques such as Smoothed Particle Hydrodynamics (SPH) are introduced. The fusion of facial animation with SPH exhibits promising results. Accordingly, the proposed fluid technique combined with facial animation is the core of this research, enabling the classification of complex, extreme expressions that occur on the human face, such as laughing, smiling, and crying (the emergence of tears) through to intense sadness with strong crying.

  2. Comparison of emotion recognition from facial expression and music.

    PubMed

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined as their presentation is usually very short (micro expressions), whereas the recognition itself does not have to be a conscious process. We assumed that the recognition from facial expressions is selected over the recognition of emotions communicated through music. In order to compare the success rate in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey that included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music works with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, whereas girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized if presented on human faces than in music, possibly because the understanding of facial emotions is one of the oldest communication skills in human society. Female advantage in emotion recognition was selected due to the necessity of their communication with the newborns during early development. The proficiency in recognizing emotional content of music and mathematical skills probably share some general cognitive skills like attention, memory and motivation. Music pieces were processed differently in the brain than facial expressions and, consequently, were probably evaluated differently as relevant emotional clues. PMID:21648329

  3. Development of a System for Automatic Facial Expression Analysis

    NASA Astrophysics Data System (ADS)

    Diago, Luis A.; Kitaoka, Tetsuko; Hagiwara, Ichiro

    Automatic recognition of facial expressions can be an important component of natural human-machine interactions. Although a large number of samples is desirable for accurately estimating a person's feelings (e.g. likeness) about a machine interface, in real-world situations only a small number of samples can be obtained, because of the high cost of collecting emotional responses from the observed person. This paper proposes a system that solves this problem while conforming to individual differences. A new method is developed for facial expression classification based on the combination of Holographic Neural Networks (HNN) and Type-2 Fuzzy Logic. For the recognition of emotions induced by facial expressions, the proposed method achieved better generalization performance than former HNN and Support Vector Machine (SVM) classifiers, while requiring less learning time than the SVM classifiers.

  4. Language and affective facial expression in children with perinatal stroke

    PubMed Central

    Lai, Philip T.; Reilly, Judy S.

    2015-01-01

    Children with perinatal stroke (PS) provide a unique opportunity to understand developing brain-behavior relations. Previous research has noted distinctive differences in behavioral sequelae between children with PS and adults with acquired stroke: children fare better, presumably due to the plasticity of the developing brain for adaptive reorganization. Whereas we are beginning to understand language development, we know little about another communicative domain, emotional expression. The current study investigates the use and integration of language and facial expression during an interview. As anticipated, the language performance of the five and six year old PS group is comparable to their typically developing (TD) peers, however, their affective profiles are distinctive: those with right hemisphere injury are less expressive with respect to affective language and affective facial expression than either those with left hemisphere injury or TD group. The two distinctive profiles for language and emotional expression in these children suggest gradients of neuroplasticity in the developing brain. PMID:26117314

  5. Facial patterning and infant emotional expression: happiness, surprise, and fear.

    PubMed

    Hiatt, S W; Campos, J J; Emde, R N

    1979-12-01

    Although recent studies have convincingly demonstrated that emotional expressions can be judged reliably from actor-posed facial displays, there exists little evidence that facial expressions in lifelike settings are similar to actor-posed displays, are reliable across situations designed to elicit the same emotion, or provide sufficient information to mediate consistent emotion judgments by raters. The present study therefore investigated these issues as they related to the emotions of happiness, surprise, and fear. 27 infants between 10 and 12 months of age (when emotion masking is not likely to confound results) were tested in 2 situations designed to elicit happiness (a peek-a-boo game and a collapsing toy), 2 to elicit surprise (a toy-switch and a vanishing-object task), and 2 to elicit fear (the visual cliff and the approach of a stranger). Dependent variables included changes in 28 facial response components taken from previous work using actor poses, as well as judgments of the presence of 6 discrete emotions. In addition, instrumental behaviors were used to verify, with responses other than facial expressions, whether the predicted emotion was elicited. In contrast to previous conclusions on the subject, we found that judges were able to make all facial expression judgments reliably, even in the absence of contextual information. Support was also obtained for at least some degree of specificity of facial component response patterns, especially for happiness and surprise. Emotion judgments by raters were found to be a function of the presence of discrete facial components predicted to be linked to those emotions. Finally, almost all situations elicited blends, rather than discrete emotions. PMID:535426

  6. The Enfacement Illusion Is Not Affected by Negative Facial Expressions

    PubMed Central

    Beck, Brianna; Cardini, Flavia; Làdavas, Elisabetta; Bertini, Caterina

    2015-01-01

    Enfacement is an illusion wherein synchronous visual and tactile inputs update the mental representation of one’s own face to assimilate another person’s face. Emotional facial expressions, serving as communicative signals, may influence enfacement by increasing the observer’s motivation to understand the mental state of the expresser. Fearful expressions, in particular, might increase enfacement because they are valuable for adaptive behavior and more strongly represented in somatosensory cortex than other emotions. In the present study, a face was seen being touched at the same time as the participant’s own face. This face was either neutral, fearful, or angry. Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. We hypothesized that seeing a fearful face (but not an angry one) would increase enfacement because of greater somatosensory resonance. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. Synchronous interpersonal visuo-tactile stimulation led to assimilation of the other’s face, but this assimilation was not modulated by facial expression processing. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing. PMID:26291532

  7. Interference between conscious and unconscious facial expression information.

    PubMed

    Ye, Xing; He, Sheng; Hu, Ying; Yu, Yong Qiang; Wang, Kai

    2014-01-01

    There is ample evidence to show that many types of visual information, including emotional information, can be processed in the absence of visual awareness. For example, it has been shown that masked subliminal facial expressions can induce priming and adaptation effects. However, stimuli made invisible in different ways may be processed to different extents and have differential effects. In this study, we adopted a flanker-type behavioral method to investigate whether a flanker rendered invisible through Continuous Flash Suppression (CFS) could induce a congruency effect on the discrimination of a visible target. Specifically, during the experiment, participants judged the expression (either happy or fearful) of a visible face in the presence of a nearby invisible face (with a happy or fearful expression). Results show that participants were slower and less accurate in discriminating the expression of the visible face when the expression of the invisible flanker face was incongruent. Thus, facial expression information rendered invisible with CFS and presented at a different spatial location could enhance or interfere with consciously processed facial expression information. PMID:25162153

  8. The Enfacement Illusion Is Not Affected by Negative Facial Expressions.

    PubMed

    Beck, Brianna; Cardini, Flavia; Làdavas, Elisabetta; Bertini, Caterina

    2015-01-01

    Enfacement is an illusion wherein synchronous visual and tactile inputs update the mental representation of one's own face to assimilate another person's face. Emotional facial expressions, serving as communicative signals, may influence enfacement by increasing the observer's motivation to understand the mental state of the expresser. Fearful expressions, in particular, might increase enfacement because they are valuable for adaptive behavior and more strongly represented in somatosensory cortex than other emotions. In the present study, a face was seen being touched at the same time as the participant's own face. This face was either neutral, fearful, or angry. Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. We hypothesized that seeing a fearful face (but not an angry one) would increase enfacement because of greater somatosensory resonance. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. Synchronous interpersonal visuo-tactile stimulation led to assimilation of the other's face, but this assimilation was not modulated by facial expression processing. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing.

  9. Facial expression recognition based on improved DAGSVM

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Cui, Ye; Zhang, Yi

    2014-11-01

    To address the cumulative error problem caused by the randomized ordering in traditional DAGSVM (Directed Acyclic Graph Support Vector Machine) classification, this paper presents an improved DAGSVM expression recognition method. The method uses the inter-class distance and the standard deviation as the measure for the classifier, which minimizes the error rate in the upper levels of the classification structure. At the same time, the paper combines the Discrete Cosine Transform (DCT) with Local Binary Patterns (LBP) to extract expression features, which form the input to the improved DAGSVM classifier for recognition. Experimental results show that, compared with other multi-class support vector machine methods, the improved DAGSVM classifier achieves a higher recognition rate. When deployed on an intelligent wheelchair platform, experiments show that the method also has better robustness.
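The feature pipeline this abstract describes (DCT coefficients combined with an LBP histogram, fed to a multi-class SVM) can be sketched as below. This is an illustrative reconstruction on synthetic data, not the authors' implementation; scikit-learn's `SVC` trains the same pairwise classifiers a DAGSVM uses, but combines them by voting rather than by walking a directed acyclic graph.

```python
# Illustrative sketch: DCT + LBP features into a pairwise multi-class SVM.
# Synthetic data; all names and parameters here are assumptions.
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC

def lbp_histogram(img, bins=256):
    """Basic 8-neighbour LBP code histogram (no rotation invariance)."""
    padded = np.pad(img, 1, mode="edge")
    center = padded[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = padded[1 + dy:padded.shape[0] - 1 + dy, 1 + dx:padded.shape[1] - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

def dct_features(img, k=8):
    """Keep the low-frequency top-left k x k block of 2-D DCT coefficients."""
    coeffs = dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:k, :k].ravel()

def extract(img):
    return np.concatenate([dct_features(img), lbp_histogram(img)])

# Synthetic "face images": three expression classes with shifted intensity.
rng = np.random.default_rng(0)
X = np.array([extract(rng.random((32, 32)) + label)
              for label in range(3) for _ in range(10)])
y = np.repeat(np.arange(3), 10)

clf = SVC(kernel="linear", decision_function_shape="ovo")  # pairwise classifiers
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy; near-perfect on this toy data
```

A real DAGSVM would evaluate those pairwise classifiers along a rooted DAG, eliminating one class per node; the improved method in the abstract additionally orders that DAG by inter-class distance.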

  10. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    PubMed

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. PMID:26876363

  11. Drug effects on responses to emotional facial expressions: recent findings.

    PubMed

    Miller, Melissa A; Bershad, Anya K; de Wit, Harriet

    2015-09-01

    Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally.

  12. Drug effects on responses to emotional facial expressions: recent findings

    PubMed Central

    Miller, Melissa A.; Bershad, Anya K.; de Wit, Harriet

    2016-01-01

    Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally. PMID:26226144

  14. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    ERIC Educational Resources Information Center

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…

  15. Teachers' Perception Regarding Facial Expressions as an Effective Teaching Tool

    ERIC Educational Resources Information Center

    Butt, Muhammad Naeem; Iqbal, Mohammad

    2011-01-01

    The major objective of the study was to explore teachers' perceptions about the importance of facial expression in the teaching-learning process. All the teachers of government secondary schools constituted the population of the study. A sample of 40 teachers, both male and female, in rural and urban areas of district Peshawar, were selected…

  16. Categorical Representation of Facial Expressions in the Infant Brain

    ERIC Educational Resources Information Center

    Leppanen, Jukka M.; Richmond, Jenny; Vogel-Farley, Vanessa K.; Moulson, Margaret C.; Nelson, Charles A.

    2009-01-01

    Categorical perception, demonstrated as reduced discrimination of within-category relative to between-category differences in stimuli, has been found in a variety of perceptual domains in adults. To examine the development of categorical perception in the domain of facial expression processing, we used behavioral and event-related potential (ERP)…

  17. Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.

    PubMed

    Wu, Tim; Hung, Alice; Mithraratne, Kumar

    2014-11-01

    This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of the skin, the subcutaneous layer, and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely isotropic, and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscle and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips and eyelids, and between the superficial soft-tissue continuum and the deep rigid skeletal bones, were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.

  18. Specificity of Facial Expression Labeling Deficits in Childhood Psychopathology

    ERIC Educational Resources Information Center

    Guyer, Amanda E.; McClure, Erin B.; Adler, Abby D.; Brotman, Melissa A.; Rich, Brendan A.; Kimes, Alane S.; Pine, Daniel S.; Ernst, Monique; Leibenluft, Ellen

    2007-01-01

    Background: We examined whether face-emotion labeling deficits are illness-specific or an epiphenomenon of generalized impairment in pediatric psychiatric disorders involving mood and behavioral dysregulation. Method: Two hundred fifty-two youths (7-18 years old) completed child and adult facial expression recognition subtests from the Diagnostic…

  19. The contribution of different facial regions to the recognition of conversational expressions.

    PubMed

    Nusseck, Manfred; Cunningham, Douglas W; Wallraven, Christian; Bülthoff, Heinrich H

    2008-01-01

    The human face is an important and complex communication channel. Humans can, however, easily read in a face not only identity information but also facial expressions with high accuracy. Here, we present the results of four psychophysical experiments in which we systematically manipulated certain facial areas in video sequences of nine conversational expressions to investigate recognition performance and its dependency on the motions of different facial parts. The results help to demonstrate what information is perceptually necessary and sufficient to recognize the different facial expressions. Subsequent analyses of the facial movements and correlation with recognition performance show that, for some expressions, one individual facial region can represent the whole expression. In other cases, the interaction of more than one facial area is needed to clarify the expression. The full set of results is used to develop a systematic description of the roles of different facial parts in the visual perception of conversational facial expressions. PMID:18831624

  20. Errors in identifying and expressing emotion in facial expressions, voices, and postures unique to social anxiety.

    PubMed

    Walker, Amy S; Nowicki, Stephen; Jones, Jeffrey; Heimann, Lisa

    2011-01-01

    The purpose of the present study was to see if 7-10-year-old socially anxious children (n = 26) made systematic errors in identifying and sending emotions in facial expressions, paralanguage, and postures as compared with the more random errors of children who were inattentive-hyperactive (n = 21). It was found that socially anxious children made more errors in identifying anger and fear in children's facial expressions and anger in adults' postures and in expressing anger in their own facial expressions than did their inattentive-hyperactive peers. Results suggest that there may be systematic difficulties specifically in visual nonverbal emotion communication that contribute to the personal and social difficulties socially anxious children experience. PMID:21902007

  2. Gaze Dynamics in the Recognition of Facial Expressions of Emotion.

    PubMed

    Barabanschikov, Vladimir A

    2015-01-01

    We studied the preferentially fixated parts and features of the human face during recognition of facial expressions of emotion. Photographs of facial expressions were used. Participants were asked to categorize these as basic emotions; during this process, eye movements were registered. It was found that variation in the intensity of an expression is mirrored in the accuracy of emotion recognition; it was also reflected in several indices of oculomotor function: duration of inspection of certain areas of the face (its upper and bottom or right parts, right and left sides), and the location, number, and duration of fixations and the viewing trajectory. In particular, for low-intensity expressions, the right side of the face was attended to predominantly (right-side dominance); this right-side dominance effect was, however, absent for expressions of high intensity. For both low- and high-intensity expressions the upper part of the face was predominantly fixated, though with greater fixation for high-intensity expressions. The majority of trials (70%), in line with findings in previous studies, revealed a V-shaped inspection trajectory. No relationship was found, however, between the accuracy of recognition of emotional expressions and either the location and duration of fixations or the pattern of gaze directedness on the face.
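The oculomotor indices mentioned above (number of fixations and dwell time per face region) reduce to a simple aggregation over fixation records. A minimal sketch, with hypothetical region labels and durations (a real pipeline would first map raw gaze coordinates onto face regions):

```python
# Illustrative only: fixation count and dwell time per face region.
# Region names and durations are hypothetical example data.
from collections import defaultdict

# Each fixation: (face region, duration in ms).
fixations = [
    ("upper_right", 210), ("upper_left", 180), ("upper_right", 250),
    ("lower_left", 120), ("upper_right", 300),
]

counts = defaultdict(int)   # number of fixations per region
dwell = defaultdict(int)    # total dwell time per region (ms)
for region, duration in fixations:
    counts[region] += 1
    dwell[region] += duration

total = sum(dwell.values())
for region in dwell:
    print(region, counts[region], dwell[region], round(dwell[region] / total, 2))
```

With these example data, `upper_right` accumulates 3 fixations and 760 ms of the 1060 ms total, the kind of right-side/upper-face dominance summary the abstract reports.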

  3. Production and discrimination of facial expressions by preschool children.

    PubMed

    Field, T M; Walden, T A

    1982-10-01

    Production and discrimination of the 8 basic facial expressions were investigated among 34 3-5-year-old preschool children. The children's productions were elicited and videotaped under 4 different prompt conditions (imitation of photographs of children's facial expressions, imitation of those in front of a mirror, imitation of those when given labels for the expressions, and when given only labels). Adults' "guesses" of the children's productions as well as the children's guesses of their own expressions on videotape were more accurate for the happy than afraid or angry expressions and for those expressions elicited during the imitation conditions. Greater accuracy of guessing by the adult than the child suggests that the children's productions were superior to their discriminations, although these skills appeared to be related. Children's production skills were also related to sociometric ratings by their peers and expressivity ratings by their teachers. These were not related to the child's age and only weakly related to the child's expressivity during classroom free-play observations. PMID:7140433

  4. Covert processing of facial expressions by people with Williams syndrome.

    PubMed

    Levy, Yonata; Pluber, Hadas; Bentin, Shlomo

    2011-01-01

    Although individuals with Williams Syndrome (WS) are empathic and sociable and perform relatively well on face recognition tasks, they perform poorly on tasks of facial expression recognition. The current study sought to investigate this seeming inconsistency. Participants were tested on a Garner-type matching paradigm in which identities and expressions were manipulated simultaneously as the relevant or irrelevant dimensions. Performance of people with WS on the expression-matching task was poor and relied primarily on facilitation afforded by congruent identities. Performance on the identity matching task came close to the level of performance of matched controls and was significantly facilitated by congruent expressions. We discuss potential accounts for the discrepant processing of expressions in the task-relevant (overt) and task-irrelevant (covert) conditions, expanding on the inherently semantic-conceptual nature of overt expression matching and its dependence on general cognitive level. PMID:19853248

  5. Recognition, expression, and understanding facial expressions of emotion in adolescents with nonverbal and general learning disabilities.

    PubMed

    Bloom, Elana; Heath, Nancy

    2010-01-01

    Children with nonverbal learning disabilities (NVLD) have been found to be worse at recognizing facial expressions than children with verbal learning disabilities (LD) and without LD. However, little research has been done with adolescents. In addition, expressing and understanding facial expressions is yet to be studied among adolescents with LD subtypes. This study examined abilities of adolescents with NVLD, with general learning disabilities (GLD), and without LD to recognize, express, and understand facial expressions of emotion. Adolescents were grouped into those with NVLD, with GLD, and without LD using the Wechsler Intelligence Scale for Children-Third Edition (short form) and Wide Range Achievement Test-Third Edition. The adolescents completed neuropsychological, recognition, expression, and understanding measures. It is intriguing that the GLD group was significantly less accurate at recognizing and understanding facial expressions compared with the NVLD and NLD groups, who did not differ. Implications are explored with regard to the need to consider possible deficits in recognition and understanding of emotion in adolescents with LD in schools.

  6. Interaction of facial expressions and familiarity: ERP evidence.

    PubMed

    Wild-Wall, Nele; Dimigen, Olaf; Sommer, Werner

    2008-02-01

    There is mounting evidence that under some conditions the processing of facial identity and facial emotional expressions may not be independent; however, the nature of this interaction remains to be established. By using event-related brain potentials (ERP) we attempted to localize these interactions within the information processing system. During an expression discrimination task (Experiment 1) categorization was faster for portraits of personally familiar vs. unfamiliar persons displaying happiness. The peak latency of the P300 (trend) and the onset of the stimulus-locked LRP were shorter for familiar than unfamiliar faces. This implies a late perceptual but pre-motoric locus of the facilitating effect of familiarity on expression categorization. In Experiment 2 participants performed familiarity decisions about portraits expressing different emotions. Results revealed an advantage of happiness over disgust specifically for familiar faces. The facilitation was localized in the response selection stage as suggested by a shorter onset of the LRP. Both experiments indicate that familiarity and facial expression may not be independent processes. However, depending on the kind of decision different processing stages may be facilitated for happy familiar faces. PMID:17997008

  7. Feature selection for facial expression recognition using deformation modeling

    NASA Astrophysics Data System (ADS)

    Srivastava, Ruchir; Sim, Terence; Yan, Shuicheng; Ranganath, Surendra

    2010-02-01

    Work on Facial Expression Recognition (FER) has mostly been done using image-based approaches. However, in recent years researchers have also been exploring the use of 3D information for FER. Most of the time, a neutral (expressionless) face of the subject is needed in both the image-based and 3D model-based approaches. However, this might not be practical in many applications. This paper tries to address this limitation of previous works by proposing a novel feature extraction technique that does not require any neutral face of the subjects. It is proposed, and validated experimentally, that the motion of certain landmark points on the face when exhibiting a particular facial expression is similar across different persons. A separate classifier is built, and relevant feature points are selected, for each expression. One-vs-all SVM classification gives promising results.
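The per-expression scheme described above can be sketched with one-vs-all linear SVMs over landmark-displacement features. Everything here (the landmark count, expression labels, and the synthetic displacement data) is hypothetical; the code only illustrates the "separate classifier per expression" idea, not the paper's actual features or selection step.

```python
# Hedged sketch: one-vs-all SVMs over landmark displacement vectors.
# Synthetic data; names and parameters are illustrative assumptions.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_landmarks = 10
expressions = ["happy", "sad", "surprise"]

# Assume each expression moves the landmarks along a characteristic
# direction; per-subject noise stands in for individual differences.
prototypes = {e: rng.normal(size=2 * n_landmarks) for e in expressions}
X, y = [], []
for label, e in enumerate(expressions):
    for _ in range(20):
        X.append(prototypes[e] + 0.3 * rng.normal(size=2 * n_landmarks))
        y.append(label)
X, y = np.array(X), np.array(y)

# One separate binary (one-vs-all) classifier per expression.
classifiers = {e: LinearSVC(C=1.0).fit(X, (y == i).astype(int))
               for i, e in enumerate(expressions)}

def predict(x):
    # Pick the expression whose classifier is most confident.
    scores = {e: clf.decision_function([x])[0] for e, clf in classifiers.items()}
    return max(scores, key=scores.get)

print(predict(prototypes["surprise"]))
```

Because the features are frame-to-frame displacements rather than offsets from a stored neutral face, no neutral scan of the subject is required, which is the practical point the abstract makes.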

  8. Forming impressions: effects of facial expression and gender stereotypes.

    PubMed

    Hack, Tay

    2014-04-01

    The present study of 138 participants explored how facial expressions and gender stereotypes influence impressions. It was predicted that images of smiling women would be evaluated more favorably on traits reflecting warmth, and that images of non-smiling men would be evaluated more favorably on traits reflecting competence. As predicted, smiling female faces were rated as more warm; however, contrary to prediction, perceived competence of male faces was not affected by facial expression. Participants' female stereotype endorsement was a significant predictor for evaluations of female faces; those who ascribed more strongly to traditional female stereotypes reported the most positive impressions of female faces displaying a smiling expression. However, a similar effect was not found for images of men; endorsement of traditional male stereotypes did not predict participants' impressions of male faces. PMID:24897907

  10. Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

    PubMed Central

    Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona

    2016-01-01

    Facial mimicry is the spontaneous response to others’ facial expressions by mirroring or matching the interaction partner. Recent evidence suggests that mimicry may not be only an automatic reaction but could depend on many factors, including social context, the type of task in which the participant is engaged, or stimulus properties (dynamic vs static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and judgment of emotional intensity. Electromyographic (EMG) activity was recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. The ratings of the emotional intensity of facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing. PMID:27390867

  13. Classification of dynamic facial expressions of emotion presented briefly.

    PubMed

    Recio, Guillermo; Schacht, Annekathrin; Sommer, Werner

    2013-01-01

    A number of studies have shown an impact of the speed of a developing facial expression of emotion on its recognition and perceived naturalness. Still, the impact of speed at the constant, short presentation times normally used in many experiments is unclear. In the present study, participants classified faces displaying facial expressions of six basic emotions in static and dynamic presentation modes and three different types of neutral movements. Stimuli were created with computer software that allows fine-grained control over action units and dynamic features. Rise times in dynamic expressions varied between 200 and 900 ms. Results replicated classical findings: better performance for expressions of happiness, frequent confusions among morphologically similar expressions, and a general dynamic facilitation for most expressions. Importantly, dynamic presentation as such facilitated more accurate classification, but variations in speed within the fast range studied here had no noticeable effect for expressions of anger, fear, happiness, and surprise. The main exceptions were sadness, which was best recognised at slow speed and in static pictures, and disgust, which was most unambiguously categorised at fast to moderate speed.

  14. Using Video Modeling to Teach Children with PDD-NOS to Respond to Facial Expressions

    ERIC Educational Resources Information Center

    Axe, Judah B.; Evans, Christine J.

    2012-01-01

    Children with autism spectrum disorders often exhibit delays in responding to facial expressions, and few studies have examined teaching responding to subtle facial expressions to this population. We used video modeling to train 3 participants with PDD-NOS (age 5) to respond to eight facial expressions: approval, bored, calming, disapproval,…

  15. Facial Expression Recognition Deficits and Faulty Learning: Implications for Theoretical Models and Clinical Applications

    ERIC Educational Resources Information Center

    Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige

    2009-01-01

    The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…

  16. Neural processing of dynamic emotional facial expressions in psychopaths.

    PubMed

    Decety, Jean; Skelly, Laurie; Yoder, Keith J; Kiehl, Kent A

    2014-02-01

    Facial expressions play a critical role in social interactions by eliciting rapid responses in the observer. Failure to perceive and experience a normal range and depth of emotion seriously impact interpersonal communication and relationships. As has been demonstrated across a number of domains, abnormal emotion processing in individuals with psychopathy plays a key role in their lack of empathy. However, the neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear and perhaps sadness. Moreover, findings are inconsistent across studies. In the current experiment, 80 incarcerated adult males scoring high, medium, and low on the Hare Psychopathy Checklist-Revised (PCL-R) underwent functional magnetic resonance imaging (fMRI) scanning while viewing dynamic facial expressions of fear, sadness, happiness, and pain. Participants who scored high on the PCL-R showed a reduction in neuro-hemodynamic response to all four categories of facial expressions in the face processing network (inferior occipital gyrus, fusiform gyrus, and superior temporal sulcus (STS)) as well as the extended network (inferior frontal gyrus and orbitofrontal cortex (OFC)), which supports a pervasive deficit across emotion domains. Unexpectedly, the response in dorsal insula to fear, sadness, and pain was greater in psychopaths than non-psychopaths. Importantly, the orbitofrontal cortex and ventromedial prefrontal cortex (vmPFC), regions critically implicated in affective and motivated behaviors, were significantly less active in individuals with psychopathy during the perception of all four emotional expressions. PMID:24359488

  17. Looking with different eyes: The psychological meaning of categorisation goals moderates facial reactivity to facial expressions.

    PubMed

    van Dillen, Lotte F; Harris, Lasana T; van Dijk, Wilco W; Rotteveel, Mark

    2015-01-01

    In the present research we examined whether the psychological meaning of people's categorisation goals affects facial muscle activity in response to facial expressions of emotion. We had participants associate eye colour (blue, brown) with either a personality trait (extraversion) or a physical trait (light frequency) and asked them to use these associations in a speeded categorisation task of angry, disgusted, happy and neutral faces while assessing participants' response times and facial muscle activity. We predicted that participants would respond differentially to the emotional faces when the categorisation criteria allowed for inferences about a target's thoughts, feelings or behaviour (i.e., when categorising extraversion), but not when these lacked any social meaning (i.e., when categorising light frequency). Indeed, emotional faces triggered facial reactions to facial expressions when participants categorised extraversion, but not when they categorised light frequency. In line with this, only when categorising extraversion did participants' response times indicate a negativity bias replicating previous results. Together, these findings provide further evidence for the contextual nature of people's selective responses to the emotions expressed by others. PMID:25435404

  18. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    PubMed Central

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in-depth younger and older adults’ ability to label a virtual agent’s facial emotions, an import consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  19. Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set

    NASA Astrophysics Data System (ADS)

    Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.

    2000-06-01

    Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular Whissel suggested that emotions are points in a space, which seem to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissel's; instead they tend to form an approximately circular pattern, called 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be defined as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP point and the activation and angular parameters we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
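The 'emotion wheel' interpolation idea described above can be sketched as a blend of FDP displacements between two neighbouring basic expressions, weighted by angular position. The wheel angles and displacement vectors below are invented for illustration; they are not values from the cited studies.

```python
# Illustrative sketch of interpolating intermediate expressions on an
# 'emotion wheel': blend the FDP displacements of two neighbouring basic
# expressions according to angular position. All numbers are invented.
import numpy as np

# Hypothetical wheel angles (degrees) for two neighbouring basic emotions
angles = {"joy": 20.0, "surprise": 70.0}
# Hypothetical FDP displacement vectors (e.g., mouth-corner and brow motion)
disp = {"joy": np.array([0.8, 0.1]), "surprise": np.array([0.2, 0.9])}

def blend(theta):
    """Linearly interpolate displacements for an angle between joy and surprise."""
    a0, a1 = angles["joy"], angles["surprise"]
    w = (theta - a0) / (a1 - a0)  # 0 at joy, 1 at surprise
    return (1 - w) * disp["joy"] + w * disp["surprise"]

print(blend(45.0))  # midway blend of the two displacement vectors
```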

  20. Facial expressions of emotion are not culturally universal.

    PubMed

    Jack, Rachael E; Garrod, Oliver G B; Yu, Hui; Caldara, Roberto; Schyns, Philippe G

    2012-05-01

    Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.

  1. Three-year-olds' rapid facial electromyographic responses to emotional facial expressions and body postures.

    PubMed

    Geangu, Elena; Quadrelli, Ermanno; Conte, Stefania; Croci, Emanuela; Turati, Chiara

    2016-04-01

    Rapid facial reactions (RFRs) to observed emotional expressions are proposed to be involved in a wide array of socioemotional skills, from empathy to social communication. Two of the most persuasive theoretical accounts propose RFRs to rely either on motor resonance mechanisms or on more complex mechanisms involving affective processes. Previous studies demonstrated that presentation of facial and bodily expressions can generate rapid changes in adult and school-age children's muscle activity. However, to date there is little to no evidence to suggest the existence of emotional RFRs from infancy to preschool age. To investigate whether RFRs are driven by motor mimicry or could also be a result of emotional appraisal processes, we recorded facial electromyographic (EMG) activation from the zygomaticus major and frontalis medialis muscles to presentation of static facial and bodily expressions of emotions (i.e., happiness, anger, fear, and neutral) in 3-year-old children. Results showed no specific EMG activation in response to bodily emotion expressions. However, observing others' happy faces led to increased activation of the zygomaticus major and decreased activation of the frontalis medialis, whereas observing others' angry faces elicited the opposite pattern of activation. This study suggests that RFRs are the result of complex mechanisms in which both affective processes and motor resonance may play an important role.

  2. Affective priming using facial expressions modulates liking for abstract art.

    PubMed

    Flexas, Albert; Rosselló, Jaume; Christensen, Julia F; Nadal, Marcos; Olivera La Rosa, Antonio; Munar, Enric

    2013-01-01

    We examined the influence of affective priming on the appreciation of abstract artworks using an evaluative priming task. Facial primes (showing happiness, disgust or no emotion) were presented under brief (Stimulus Onset Asynchrony, SOA = 20 ms) and extended (SOA = 300 ms) conditions. Differences in aesthetic liking for abstract paintings depending on the emotion expressed in the preceding primes provided a measure of the priming effect. The results showed that, for the extended SOA, artworks were liked more when preceded by happiness primes and less when preceded by disgust primes. Facial expressions of happiness, though not of disgust, exerted similar effects in the brief SOA condition. Subjective measures and a forced-choice task revealed no evidence of prime awareness in the suboptimal condition. Our results are congruent with findings showing that the affective transfer elicited by priming biases evaluative judgments, extending previous research to the domain of aesthetic appreciation.

  3. Facial expression recognition in Alzheimer's disease: a longitudinal study.

    PubMed

    Torres, Bianca; Santos, Raquel Luiza; Sousa, Maria Fernanda Barroso de; Simões Neto, José Pedro; Nogueira, Marcela Moreira Lima; Belfort, Tatiana T; Dias, Rachel; Dourado, Marcia Cristina Nascimento

    2015-05-01

    Facial recognition is one of the most important aspects of social cognition. In this study, we investigate the patterns of change and the factors involved in the ability to recognize emotion in mild Alzheimer's disease (AD). Through a longitudinal design, we assessed 30 people with AD. We used an experimental task that includes matching expressions with picture stimuli, labelling emotions and emotionally recognizing a stimulus situation. We observed a significant difference in the situational recognition task (p ≤ 0.05) between baseline and the second evaluation. The linear regression showed that cognition is a predictor of emotion recognition impairment (p ≤ 0.05). The ability to perceive emotions from facial expressions was impaired, particularly when the emotions presented were relatively subtle. Cognition is recruited to comprehend emotional situations in cases of mild dementia. PMID:26017202

  5. Can Neurotypical Individuals Read Autistic Facial Expressions? Atypical Production of Emotional Facial Expressions in Autism Spectrum Disorders.

    PubMed

    Brewer, Rebecca; Biotti, Federica; Catmur, Caroline; Press, Clare; Happé, Francesca; Cook, Richard; Bird, Geoffrey

    2016-02-01

    The difficulties encountered by individuals with autism spectrum disorder (ASD) when interacting with neurotypical (NT, i.e. nonautistic) individuals are usually attributed to failure to recognize the emotions and mental states of their NT interaction partner. It is also possible, however, that at least some of the difficulty is due to a failure of NT individuals to read the mental and emotional states of ASD interaction partners. Previous research has frequently observed deficits of typical facial emotion recognition in individuals with ASD, suggesting atypical representations of emotional expressions. Relatively little research, however, has investigated the ability of individuals with ASD to produce recognizable emotional expressions, and thus, whether NT individuals can recognize autistic emotional expressions. The few studies which have investigated this have used only NT observers, making it impossible to determine whether atypical representations are shared among individuals with ASD, or idiosyncratic. This study investigated NT and ASD participants' ability to recognize emotional expressions produced by NT and ASD posers. Three posing conditions were included, to determine whether potential group differences are due to atypical cognitive representations of emotion, impaired understanding of the communicative value of expressions, or poor proprioceptive feedback. Results indicated that ASD expressions were recognized less well than NT expressions, and that this is likely due to a genuine deficit in the representation of typical emotional expressions in this population. Further, ASD expressions were equally poorly recognized by NT individuals and those with ASD, implicating idiosyncratic, rather than common, atypical representations of emotional expressions in ASD.

  6. 4-D facial expression recognition by learning geometric deformations.

    PubMed

    Ben Amor, Boulbaba; Drira, Hassen; Berretti, Stefano; Daoudi, Mohamed; Srivastava, Anuj

    2014-12-01

    In this paper, we present an automatic approach for facial expression recognition from 3-D video sequences. In the proposed solution, the 3-D faces are represented by collections of radial curves and a Riemannian shape analysis is applied to effectively quantify the deformations induced by the facial expressions in a given subsequence of 3-D frames. This is obtained from the dense scalar field, which denotes the shooting directions of the geodesic paths constructed between pairs of corresponding radial curves of two faces. As the resulting dense scalar fields show a high dimensionality, Linear Discriminant Analysis (LDA) transformation is applied to the dense feature space. Two methods are then used for classification: 1) 3-D motion extraction with temporal Hidden Markov model (HMM) and 2) mean deformation capturing with random forest. While a dynamic HMM on the features is trained in the first approach, the second one computes mean deformations under a window and applies multiclass random forest. Both of the proposed classification schemes on the scalar fields showed comparable results and outperformed earlier studies on facial expression recognition from 3-D video sequences.
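The second classification scheme described above (LDA dimensionality reduction followed by a random forest over mean deformations) can be sketched as a simple pipeline. The data here are random placeholders standing in for the dense-scalar-field features; dimensions and class counts are assumptions for the sketch.

```python
# Hedged sketch of LDA feature reduction followed by random-forest
# classification, as in the abstract's second scheme. Data are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(90, 50))   # 90 sequences, 50-dim mean-deformation features
y = np.tile(np.arange(6), 15)   # six expression classes, balanced

model = make_pipeline(
    LinearDiscriminantAnalysis(n_components=5),  # at most n_classes - 1
    RandomForestClassifier(n_estimators=50, random_state=0),
)
model.fit(X, y)
pred = model.predict(X[:3])
print(pred.shape)  # predictions for the first three sequences
```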

  7. Featural processing in recognition of emotional facial expressions.

    PubMed

    Beaudry, Olivia; Roy-Charland, Annie; Perron, Melanie; Cormier, Isabelle; Tapp, Roxane

    2014-04-01

    The present study aimed to clarify the role played by the eye/brow and mouth areas in the recognition of the six basic emotions. In Experiment 1, accuracy was examined while participants viewed partial and full facial expressions; in Experiment 2, participants viewed full facial expressions while their eye movements were recorded. Recognition rates were consistent with previous research: happiness was highest and fear was lowest. The mouth and eye/brow areas were not equally important for the recognition of all emotions. More precisely, while the mouth was revealed to be important in the recognition of happiness and the eye/brow area of sadness, results are not as consistent for the other emotions. In Experiment 2, consistent with previous studies, the eyes/brows were fixated for longer periods than the mouth for all emotions. Again, variations occurred as a function of the emotions, the mouth having an important role in happiness and the eyes/brows in sadness. The general pattern of results for the other four emotions was inconsistent between the experiments as well as across different measures. The complexity of the results suggests that the recognition process of emotional facial expressions cannot be reduced to a simple feature processing or holistic processing for all emotions.

  8. The effect of sad facial expressions on weight judgment

    PubMed Central

    Weston, Trent D.; Hass, Norah C.; Lim, Seung-Lark

    2015-01-01

    Although the body weight evaluation (e.g., normal or overweight) of others relies on perceptual impressions, it also can be influenced by other psychosocial factors. In this study, we explored the effect of task-irrelevant emotional facial expressions on judgments of body weight and the relationship between emotion-induced weight judgment bias and other psychosocial variables including attitudes toward obese persons. Forty-four participants were asked to quickly make binary body weight decisions for 960 randomized sad and neutral faces of varying weight levels presented on a computer screen. The results showed that sad facial expressions systematically decreased the decision threshold of overweight judgments for male faces. This perceptual decision bias by emotional expressions was positively correlated with the belief that being overweight is not under the control of obese persons. Our results provide experimental evidence that task-irrelevant emotional expressions can systematically change the decision threshold for weight judgments, demonstrating that sad expressions can make faces appear more overweight than they would otherwise be judged. PMID:25914669
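The notion of a decision threshold for overweight judgments can be illustrated by fitting a logistic (psychometric) curve to binary responses over weight levels and reading off the 50% point; a shift in that point is the kind of bias the study reports. The responses below are simulated under an assumed curve, not study data.

```python
# Hypothetical illustration of a weight-judgment decision threshold: fit a
# logistic curve to simulated binary "overweight" responses and locate the
# 50% point. All numbers are invented for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
levels = np.repeat(np.linspace(-2, 2, 9), 80)   # stimulus weight levels
p_true = 1 / (1 + np.exp(-3 * (levels - 0.3)))  # assumed true response curve
resp = rng.random(levels.size) < p_true         # simulated binary judgments

fit = LogisticRegression().fit(levels.reshape(-1, 1), resp)
threshold = -fit.intercept_[0] / fit.coef_[0, 0]  # level where P(overweight)=0.5
print(round(float(threshold), 2))  # estimate near the simulated shift of 0.3
```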

  9. Slowing down presentation of facial movements and vocal sounds enhances facial expression recognition and induces facial-vocal imitation in children with autism.

    PubMed

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-09-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-Rom, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a static control. Overall, children with autism showed lower performance in expression recognition and more induced facial-vocal imitation than controls. In the autistic group, facial expression recognition and induced facial-vocal imitation were significantly enhanced in slow conditions. Findings may give new perspectives for understanding and intervention for verbal and emotional perceptive and communicative impairments in autistic populations. PMID:17029018

  10. Proposal of Self-Learning and Recognition System of Facial Expression

    NASA Astrophysics Data System (ADS)

    Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko

    We describe the realization of a more complex function using information acquired from several simple built-in functions. We propose a self-learning and recognition system for human facial expressions, achieved through natural interaction between human and robot. A robot equipped with this system can understand human facial expressions and behave according to them once the learning process is complete. The system is modelled after the process by which a baby learns its parents’ facial expressions. Equipped with a camera, the robot can capture face images; with CdS sensors on its head, it can obtain information about human actions. Using the information from these sensors, the robot extracts the features of each facial expression. After self-learning is complete, when a person changes his or her facial expression in front of the robot, the robot performs the actions associated with that facial expression.

  11. The use of facial expressions for pain assessment purposes in dementia: a narrative review.

    PubMed

    Oosterman, Joukje M; Zwakhalen, Sandra; Sampson, Elizabeth L; Kunz, Miriam

    2016-04-01

    Facial expressions convey reliable nonverbal signals about pain and thus are very useful for assessing pain in patients with limited communicative ability, such as patients with dementia. In this review, we present an overview of the available pain observation tools and how they make use of facial expressions. Utility and reliability of facial expressions to measure pain in dementia are discussed, together with the effect of dementia severity on these facial expressions. Next, we present how behavioral alterations may overlap with facial expressions of pain, and may even influence the extent to which pain is facially expressed. The main focus is on disinhibition, apathy and emotional changes. Finally, an overview of theoretical considerations and practical implications is presented for assessing pain using facial expressions in clinical settings. PMID:27032976

  13. Rapid Facial Reactions to Emotional Facial Expressions in Typically Developing Children and Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Beall, Paula M.; Moody, Eric J.; McIntosh, Daniel N.; Hepburn, Susan L.; Reed, Catherine L.

    2008-01-01

    Typical adults mimic facial expressions within 1000ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study…

  14. Lateralization for dynamic facial expressions in human superior temporal sulcus.

    PubMed

    De Winter, François-Laurent; Zhu, Qi; Van den Stock, Jan; Nelissen, Koen; Peeters, Ronald; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu

    2015-02-01

    Most face processing studies in humans show stronger activation in the right compared to the left hemisphere. Evidence is largely based on studies with static stimuli focusing on the fusiform face area (FFA). Hence, the pattern of lateralization for dynamic faces is less clear. Furthermore, it is unclear whether this property is common to human and non-human primates due to predisposing processing strategies in the right hemisphere or that alternatively left sided specialization for language in humans could be the driving force behind this phenomenon. We aimed to address both issues by studying lateralization for dynamic facial expressions in monkeys and humans. Therefore, we conducted an event-related fMRI experiment in three macaques and twenty right handed humans. We presented human and monkey dynamic facial expressions (chewing and fear) as well as scrambled versions to both species. We studied lateralization in independently defined face-responsive and face-selective regions by calculating a weighted lateralization index (LIwm) using a bootstrapping method. In order to examine if lateralization in humans is related to language, we performed a separate fMRI experiment in ten human volunteers including a 'speech' expression (one syllable non-word) and its scrambled version. Both within face-responsive and selective regions, we found consistent lateralization for dynamic faces (chewing and fear) versus scrambled versions in the right human posterior superior temporal sulcus (pSTS), but not in FFA nor in ventral temporal cortex. Conversely, in monkeys no consistent pattern of lateralization for dynamic facial expressions was observed. Finally, LIwms based on the contrast between different types of dynamic facial expressions (relative to scrambled versions) revealed left-sided lateralization in human pSTS for speech-related expressions compared to chewing and emotional expressions. To conclude, we found consistent laterality effects in human posterior STS but not…
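The weighted lateralization index used in this study is not fully specified in the abstract; the following is a minimal sketch of a generic bootstrapped lateralization index, assuming voxel-wise activation estimates per hemisphere as inputs (the study's exact LIwm weighting scheme is not reproduced here):

```python
import numpy as np

def lateralization_index(right, left):
    # LI in [-1, 1]; positive values indicate right-hemisphere dominance
    return (right - left) / (right + left)

def bootstrap_li(right_vals, left_vals, n_boot=2000, seed=0):
    """Bootstrap a lateralization index over voxel-wise activations.

    right_vals / left_vals: 1-D arrays of activation estimates
    (hypothetical inputs: one value per voxel in each hemisphere's ROI).
    Returns the mean LI and a 95% percentile interval.
    """
    rng = np.random.default_rng(seed)
    right_vals = np.asarray(right_vals, float)
    left_vals = np.asarray(left_vals, float)
    lis = []
    for _ in range(n_boot):
        # resample voxels with replacement, then compute LI of the means
        r = rng.choice(right_vals, right_vals.size, replace=True).mean()
        l = rng.choice(left_vals, left_vals.size, replace=True).mean()
        lis.append(lateralization_index(r, l))
    return float(np.mean(lis)), np.percentile(lis, [2.5, 97.5])
```

A strongly right-lateralized region would yield a mean LI close to +1 with a bootstrap interval excluding zero.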

  15. Automated and objective action coding of facial expressions in patients with acute facial palsy.

    PubMed

    Haase, Daniel; Minnigerode, Laura; Volk, Gerd Fabian; Denzler, Joachim; Guntinas-Lichius, Orlando

    2015-05-01

    Aim of the present observational single center study was to objectively assess facial function in patients with idiopathic facial palsy with a new computer-based system that automatically recognizes action units (AUs) defined by the Facial Action Coding System (FACS). Still photographs using posed facial expressions of 28 healthy subjects and of 299 patients with acute facial palsy were automatically analyzed for bilateral AU expression profiles. All palsies were graded with the House-Brackmann (HB) grading system and with the Stennert Index (SI). Changes of the AU profiles during follow-up were analyzed for 77 patients. The initial HB grading of all patients was 3.3 ± 1.2. SI at rest was 1.86 ± 1.3 and during motion 3.79 ± 4.3. Healthy subjects showed a significant AU asymmetry score of 21 ± 11 % and there was no significant difference to patients (p = 0.128). At initial examination of patients, the number of activated AUs was significantly lower on the paralyzed side than on the healthy side (p < 0.0001). The final examination for patients took place 4 ± 6 months post baseline. The number of activated AUs and the ratio between affected and healthy side increased significantly between baseline and final examination (both p < 0.0001). The asymmetry score decreased between baseline and final examination (p < 0.0001). The number of activated AUs on the healthy side did not change significantly (p = 0.779). Radical rethinking in facial grading is worthwhile: automated FACS delivers fast and objective global and regional data on facial motor function for use in clinical routine and clinical trials.
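The abstract reports an AU asymmetry score as a percentage but does not give its formula; as a minimal sketch, one plausible definition, assuming per-AU activation intensities for each half of the face (hypothetical inputs, not the study's exact metric), might be:

```python
import numpy as np

def au_asymmetry_score(left, right, eps=1e-9):
    """Percent asymmetry between bilateral Action Unit intensity profiles.

    left / right: per-AU activation intensities for each side of the face.
    0% = perfectly symmetric activation, 100% = fully one-sided.
    """
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    # normalised absolute left-right difference, averaged over AUs
    per_au = np.abs(left - right) / np.maximum(left + right, eps)
    return 100.0 * per_au.mean()
```

Under a definition like this, mild natural asymmetry produces a nonzero score even in healthy faces, which is consistent with the observation above that healthy subjects are not at 0%.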

  16. Emotional Representation in Facial Expression and Script: A Comparison between Normal and Autistic Children

    ERIC Educational Resources Information Center

    Balconi, Michela; Carrera, Alba

    2007-01-01

    The paper explored conceptual and lexical skills with regard to emotional correlates of facial stimuli and scripts. In two different experimental phases normal and autistic children observed six facial expressions of emotions (happiness, anger, fear, sadness, surprise, and disgust) and six emotional scripts (contextualized facial expressions). In…

  17. More emotional facial expressions during episodic than during semantic autobiographical retrieval.

    PubMed

    El Haj, Mohamad; Antoine, Pascal; Nandrino, Jean Louis

    2016-04-01

    There is a substantial body of research on the relationship between emotion and autobiographical memory. Using facial analysis software, our study addressed this relationship by investigating basic emotional facial expressions that may be detected during autobiographical recall. Participants were asked to retrieve 3 autobiographical memories, each of which was triggered by one of the following cue words: happy, sad, and city. The autobiographical recall was analyzed by a software for facial analysis that detects and classifies basic emotional expressions. Analyses showed that emotional cues triggered the corresponding basic facial expressions (i.e., happy facial expression for memories cued by happy). Furthermore, we dissociated episodic and semantic retrieval, observing more emotional facial expressions during episodic than during semantic retrieval, regardless of the emotional valence of cues. Our study provides insight into facial expressions that are associated with emotional autobiographical memory. It also highlights an ecological tool to reveal physiological changes that are associated with emotion and memory.

  18. Body Actions Change the Appearance of Facial Expressions

    PubMed Central

    Fantoni, Carlo; Gerbino, Walter

    2014-01-01

    Perception, cognition, and emotion do not operate along segregated pathways; rather, their adaptive interaction is supported by various sources of evidence. For instance, the aesthetic appraisal of powerful mood inducers like music can bias the facial expression of emotions towards mood congruency. In four experiments we showed similar mood-congruency effects elicited by the comfort/discomfort of body actions. Using a novel Motor Action Mood Induction Procedure, we let participants perform comfortable/uncomfortable visually-guided reaches and tested them in a facial emotion identification task. Through the alleged mediation of motor action induced mood, action comfort enhanced the quality of the participant’s global experience (a neutral face appeared happy and a slightly angry face neutral), while action discomfort made a neutral face appear angry and a slightly happy face neutral. Furthermore, uncomfortable (but not comfortable) reaching improved the sensitivity for the identification of emotional faces and reduced the identification time of facial expressions, as a possible effect of hyper-arousal from an unpleasant bodily experience. PMID:25251882

  19. Discriminative shared Gaussian processes for multiview and view-invariant facial expression recognition.

    PubMed

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2015-01-01

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers learned separately for each view or a single classifier learned for all views. However, these approaches ignore the fact that different views of a facial expression are just different manifestations of the same facial expression. By accounting for this redundancy, we can design more effective classifiers for the target task. To this end, we propose a discriminative shared Gaussian process latent variable model (DS-GPLVM) for multiview and view-invariant classification of facial expressions from multiple views. In this model, we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Finally, classification of an observed facial expression is carried out either in the view-invariant manner (using only a single view of the expression) or in the multiview manner (using multiple views of the expression). The proposed model can also be used to perform fusion of different facial features in a principled manner. We validate the proposed DS-GPLVM on both posed and spontaneously displayed facial expressions from three publicly available datasets (MultiPIE, labeled face parts in the wild, and static facial expressions in the wild). We show that this model outperforms the state-of-the-art methods for multiview and view-invariant facial expression classification, and several state-of-the-art methods for multiview learning and feature fusion. PMID:25438312

  20. Face recognition using facial expression: a novel approach

    NASA Astrophysics Data System (ADS)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly the most effective nonverbal communication. The face has always been the equation of a person's identity. The face draws the demarcation line between identity and extinction. Each line on the face adds an attribute to the identity. These lines become prominent when we experience an emotion and these lines do not change completely with age. In this paper we have proposed a new technique for face recognition which focuses on the facial expressions of the subject to identify his face. This is a grey area on which not much light has been thrown earlier. According to earlier researches it is difficult to alter the natural expression. So our technique will be beneficial for identifying occluded or intentionally disguised faces. The test results of the experiments conducted prove that this technique will give a new direction in the field of face recognition. This technique will provide a strong base to the area of face recognition and will be used as the core method for critical defense security related issues.

  1. Toward a dialect theory: cultural differences in the expression and recognition of posed facial expressions.

    PubMed

    Elfenbein, Hillary Anger; Beaupré, Martin; Lévesque, Manon; Hess, Ursula

    2007-02-01

    Two studies provided direct support for a recently proposed dialect theory of communicating emotion, positing that expressive displays show cultural variations similar to linguistic dialects, thereby decreasing accurate recognition by out-group members. In Study 1, 60 participants from Quebec and Gabon posed facial expressions. Dialects, in the form of activating different muscles for the same expressions, emerged most clearly for serenity, shame, and contempt and also for anger, sadness, surprise, and happiness, but not for fear, disgust, or embarrassment. In Study 2, Quebecois and Gabonese participants judged these stimuli and stimuli standardized to erase cultural dialects. As predicted, an in-group advantage emerged for nonstandardized expressions only and most strongly for expressions with greater regional dialects, according to Study 1.

  2. Face in profile view reduces perceived facial expression intensity: an eye-tracking study.

    PubMed

    Guo, Kun; Shaw, Heather

    2015-02-01

    Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although quantitatively viewpoint had expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that the viewpoint-invariant facial expression processing is categorical perception, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues. PMID:25531122

  3. QUANTIFYING ATYPICALITY IN AFFECTIVE FACIAL EXPRESSIONS OF CHILDREN WITH AUTISM SPECTRUM DISORDERS.

    PubMed

    Metallinou, Angeliki; Grossman, Ruth B; Narayanan, Shrikanth

    2013-01-01

    We focus on the analysis, quantification and visualization of atypicality in affective facial expressions of children with High Functioning Autism (HFA). We examine facial Motion Capture data from typically developing (TD) children and children with HFA, using various statistical methods, including Functional Data Analysis, in order to quantify atypical expression characteristics and uncover patterns of expression evolution in the two populations. Our results show that children with HFA display higher asynchrony of motion between facial regions, more rough facial and head motion, and a larger range of facial region motion. Overall, subjects with HFA consistently display a wider variability in the expressive facial gestures that they employ. Our analysis demonstrates the utility of computational approaches for understanding behavioral data and brings new insights into the autism domain regarding the atypicality that is often associated with facial expressions of subjects with HFA.
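The asynchrony of motion between facial regions reported above can be illustrated with a simple cross-correlation lag between two region trajectories; this is a generic sketch only (the study's own statistical methods, including Functional Data Analysis, are more involved):

```python
import numpy as np

def peak_lag(x, y, fs=60.0):
    """Lag (in seconds) at which trajectory y best aligns with x.

    x, y: equal-length 1-D motion trajectories (e.g., marker displacement
    sampled at fs Hz). A negative value means x leads y.
    """
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    corr = np.correlate(x, y, mode="full")
    lags = np.arange(-(y.size - 1), x.size)
    return lags[np.argmax(corr)] / fs
```

Larger absolute lags between, say, brow and mouth trajectories would indicate greater asynchrony between those facial regions.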

  4. Plain faces are more expressive: comparative study of facial colour, mobility and musculature in primates.

    PubMed

    Santana, Sharlene E; Dobson, Seth D; Diogo, Rui

    2014-05-01

    Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate its behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution.

  5. Emotional conflict in facial expression processing during scene viewing: an ERP study.

    PubMed

    Xu, Qiang; Yang, Yaping; Zhang, Entao; Qiao, Fuqiang; Lin, Wenyi; Liang, Ningjian

    2015-05-22

    Facial expressions are fundamental emotional stimuli as they convey important information in social interaction. In everyday life a face always appears in a complex context, and the scenes in which faces are embedded provide typical visual context. The aim of the present study was to investigate the processing of emotional conflict between facial expressions and emotional scenes by recording event-related potentials (ERPs). We found that when the scene was presented before the face-scene compound stimulus, the scene had an influence on facial expression processing. Specifically, emotionally incongruent (in conflict) face-scene compound stimuli elicited larger fronto-central N2 amplitude relative to the emotionally congruent face-scene compound stimuli. The effect occurred in the post-perceptual stage of facial expression processing and reflected emotional conflict monitoring between emotional scenes and facial expressions. The present findings emphasized the importance of emotional scenes as a context factor in the study of the processing of facial expressions. PMID:25747865

  7. Speed and accuracy of facial expression classification in avoidant personality disorder: a preliminary study.

    PubMed

    Rosenthal, M Zachary; Kim, Kwanguk; Herr, Nathaniel R; Smoski, Moria J; Cheavens, Jennifer S; Lynch, Thomas R; Kosson, David S

    2011-10-01

    The aim of this preliminary study was to examine whether individuals with avoidant personality disorder (APD) could be characterized by deficits in the classification of dynamically presented facial emotional expressions. Using a community sample of adults with APD (n = 17) and non-APD controls (n = 16), speed and accuracy of facial emotional expression recognition was investigated in a task that morphs facial expressions from neutral to prototypical expressions (Multi-Morph Facial Affect Recognition Task; Blair, Colledge, Murray, & Mitchell, 2001). Results indicated that individuals with APD were significantly more likely than controls to make errors when classifying fully expressed fear. However, no differences were found between groups in the speed to correctly classify facial emotional expressions. The findings are some of the first to investigate facial emotional processing in a sample of individuals with APD and point to an underlying deficit in processing social cues that may be involved in the maintenance of APD. PMID:22448805

  8. Active and dynamic information fusion for facial expression understanding from image sequences.

    PubMed

    Zhang, Yongmian; Ji, Qiang

    2005-05-01

    This paper explores the use of multisensory information fusion technique with Dynamic Bayesian networks (DBNs) for modeling and understanding the temporal behaviors of facial expressions in image sequences. Our facial feature detection and tracking based on active IR illumination provides reliable visual information under variable lighting and head motion. Our approach to facial expression recognition lies in the proposed dynamic and probabilistic framework based on combining DBNs with Ekman's Facial Action Coding System (FACS) for systematically modeling the dynamic and stochastic behaviors of spontaneous facial expressions. The framework not only provides a coherent and unified hierarchical probabilistic framework to represent spatial and temporal information related to facial expressions, but also allows us to actively select the most informative visual cues from the available information sources to minimize the ambiguity in recognition. The recognition of facial expressions is accomplished by fusing not only from the current visual observations, but also from the previous visual evidences. Consequently, the recognition becomes more robust and accurate through explicitly modeling temporal behavior of facial expression. In this paper, we present the theoretical foundation underlying the proposed probabilistic and dynamic framework for facial expression modeling and understanding. Experimental results demonstrate that our approach can accurately and robustly recognize spontaneous facial expressions from an image sequence under different conditions.

  9. A study of patient facial expressivity in relation to orthodontic/surgical treatment.

    PubMed

    Nafziger, Y J

    1994-09-01

    A dynamic analysis of the faces of patients seeking an aesthetic restoration of facial aberrations with orthognathic treatment requires (besides the routine static study, such as records, study models, photographs, and cephalometric tracings) the study of their facial expressions. To determine a classification method for the units of expressive facial behavior, the mobility of the face is studied with the aid of the facial action coding system (FACS) created by Ekman and Friesen. With video recordings of faces and photographic images taken from the video recordings, the authors have modified a technique of facial analysis structured on the visual observation of the anatomic basis of movement. The technique, itself, is based on the defining of individual facial expressions and then codifying such expressions through the use of minimal, anatomic action units. These action units actually combine to form facial expressions. With the help of FACS, the facial expressions of 18 patients before and after orthognathic surgery, and six control subjects without dentofacial deformation have been studied. I was able to register 6278 facial expressions and then further define 18,844 action units, from the 6278 facial expressions. A classification of the facial expressions made by subject groups and repeated in quantified time frames has allowed establishment of "rules" or "norms" relating to expression, thus further enabling the making of comparisons of facial expressiveness between patients and control subjects. This study indicates that the facial expressions of the patients were more similar to the facial expressions of the controls after orthognathic surgery. It was possible to distinguish changes in facial expressivity in patients after dentofacial surgery, the type and degree of change depended on the facial structure before surgery. Changes noted tended toward a functioning that is identical to that of subjects who do not suffer from dysmorphosis and toward greater lip

  10. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    PubMed

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. PMID:27425385

  12. Social phobics do not misinterpret facial expression of emotion.

    PubMed

    Philippot, Pierre; Douilliez, Céline

    2005-05-01

    Attentional biases in the processing of threatening facial expressions in social anxiety are well documented. It is generally assumed that these attentional biases originate in an evaluative bias: socially threatening information would be evaluated more negatively by socially anxious individuals. However, three studies have failed to evidence a negative evaluative bias in the processing of emotional facial expression (EFE) in socially anxious individuals. These studies however suffer from several methodological limitations that the present study has attempted to overcome. Twenty-one out-patients diagnosed with generalized social phobia have been compared to 20 out-patients diagnosed with another anxiety disorder and with 39 normal controls matched for gender, age and level of education. They had to decode on seven emotion intensity scales a set of 40 EFE whose intensity and emotional nature were manipulated. Although sufficient statistical power was ensured, no differences among groups could be found in terms of decoding accuracy, attributed emotion intensity, or reported difficulty of the task. Based on these findings as well as on other evidences, we propose that, if they exist, evaluative biases in social anxiety should be implicit and automatic and that they might be determined by the relevance of the stimulus to the person's concern rather than by the stimulus valence. The implications of these findings for the interpersonal processes involved in social phobia are discussed. PMID:15865918

  13. Differential priming effect for subliminal fear and disgust facial expressions.

    PubMed

    Lee, Su Young; Kang, Jee In; Lee, Eun; Namkoong, Kee; An, Suk Kyoon

    2011-02-01

    Compared to neutral or happy stimuli, subliminal fear stimuli are known to be well processed through the automatic pathway. We tried to examine whether fear stimuli could be processed more strongly than other negative emotional stimuli using a modified subliminal affective priming paradigm. Twenty-six healthy subjects participated in two separated sessions. Fear, disgust and neutral facial expressions were adopted as primes, and 50% happy facial stimuli were adopted as a target to let only stronger negative primes reveal a priming effect. Participants were asked to appraise the affect of target faces in the affect appraisal session and to appraise the genuineness of target faces in the genuineness appraisal session. The genuineness instruction was developed to help participants be sensitive to potential threats. In the affect appraisal, participants judged 50% happy target faces significantly more 'unpleasant' when they were primed by fear faces than primed by 50% happy control faces. In the genuineness appraisal, participants judged targets significantly more 'not genuine' when they were primed by fear and disgust faces than primed by controls. These findings suggest that there may be differential priming effects between subliminal fear and disgust expressions, which could be modulated by a sensitive context of potential threat.

  14. Deficits in the Mimicry of Facial Expressions in Parkinson's Disease

    PubMed Central

    Livingstone, Steven R.; Vezer, Esztella; McGarry, Lucy M.; Lang, Anthony E.; Russo, Frank A.

    2016-01-01

    Background: Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behavior may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. Objective: To assess the presence of facial mimicry in patients with Parkinson's disease. Method: Twenty-seven non-depressed patients with idiopathic Parkinson's disease and 28 age-matched controls had their facial muscles recorded with electromyography while they observed presentations of calm, happy, sad, angry, and fearful emotions. Results: Patients exhibited reduced amplitude and delayed onset in the zygomaticus major muscle region (smiling response) following happy presentations (patients M = 0.02, 95% confidence interval [CI] −0.15 to 0.18, controls M = 0.26, CI 0.14 to 0.37, ANOVA, effect size [ES] = 0.18, p < 0.001). Although patients exhibited activation of the corrugator supercilii and medial frontalis (frowning response) following sad and fearful presentations, the frontalis response to sad presentations was attenuated relative to controls (patients M = 0.05, CI −0.08 to 0.18, controls M = 0.21, CI 0.09 to 0.34, ANOVA, ES = 0.07, p = 0.017). The amplitude of patients' zygomaticus activity in response to positive emotions was found to be negatively correlated with response times for ratings of emotional identification, suggesting a motor-behavioral link (r = –0.45, p = 0.02, two-tailed). Conclusions: Patients showed decreased mimicry overall, mimicking other people's frowns to some extent, but presenting with profoundly weakened and delayed smiles. These findings open a new avenue of inquiry into the “masked face” syndrome of PD. PMID:27375505

  15. Exaggerated perception of facial expressions is increased in individuals with schizotypal traits.

    PubMed

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2015-01-01

    Emotional facial expressions are indispensable communicative tools, and social interactions involving facial expressions are impaired in some psychiatric disorders. Recent studies revealed that the perception of dynamic facial expressions was exaggerated in normal participants, and this exaggerated perception is weakened in autism spectrum disorder (ASD). Based on the notion that ASD and schizophrenia spectrum disorder are at two extremes of the continuum with respect to social impairment, we hypothesized that schizophrenic characteristics would strengthen the exaggerated perception of dynamic facial expressions. To test this hypothesis, we investigated the relationship between the perception of facial expressions and schizotypal traits in a normal population. We presented dynamic and static facial expressions, and asked participants to change an emotional face display to match the perceived final image. The presence of schizotypal traits was positively correlated with the degree of exaggeration for dynamic, as well as static, facial expressions. Among its subscales, the paranoia trait was positively correlated with the exaggerated perception of facial expressions. These results suggest that schizotypal traits, specifically the tendency to over-attribute mental states to others, exaggerate the perception of emotional facial expressions.

  17. Affect-specific activation of shared networks for perception and execution of facial expressions.

    PubMed

    Kircher, Tilo; Pohl, Anna; Krach, Sören; Thimm, Markus; Schulte-Rüther, Martin; Anders, Silke; Mathiak, Klaus

    2013-04-01

    Previous studies have shown overlapping neural activations for observation and execution or imitation of emotional facial expressions. These shared representations have been assumed to provide indirect evidence for a human mirror neuron system, which is suggested to be a prerequisite of action comprehension. We aimed at clarifying whether shared representations in and beyond human mirror areas are specifically activated by affective facial expressions or whether they are activated by facial expressions independent of the emotional meaning. During neuroimaging, participants observed and executed happy and non-emotional facial expressions. Shared representations were revealed for happy facial expressions in the pars opercularis, the precentral gyrus, in the superior temporal gyrus/medial temporal gyrus (MTG), in the pre-supplementary motor area and in the right amygdala. All areas showed less pronounced activation in the non-emotional condition. When directly compared, significantly stronger neural responses emerged for happy facial expressions in the pre-supplementary motor area and in the MTG than for non-emotional stimuli. We assume that activation of shared representations depends on the affect and (social) relevance of the facial expression. The pre-supplementary motor area is a core shared-representation structure supporting observation and execution of affective contagious facial expressions and might have a modulatory role during the preparation of executing happy facial expressions.

  18. Processing of facial expressions of emotions by adults with Down syndrome and moderate intellectual disability.

    PubMed

    Carvajal, Fernando; Fernández-Alcaraz, Camino; Rueda, María; Sarrión, Louise

    2012-01-01

    The processing of facial expressions of emotions by 23 adults with Down syndrome and moderate intellectual disability was compared with that of adults with intellectual disability of other etiologies (24 matched in cognitive level and 26 with mild intellectual disability). Each participant performed 4 tasks of the Florida Affect Battery and an original task in which they had to match facial expressions after observing the complete face or one of its halves. Adults with Down syndrome did not show any specific difficulty in recognizing facial expressions, despite poorer discrimination between facial expressions, and they tended to take more notice of the lower half of the face. PMID:22240141

  19. Is there a dynamic advantage for facial expressions?

    PubMed

    Fiorentini, Chiara; Viviani, Paolo

    2011-03-22

    Some evidence suggests that it is easier to identify facial expressions (FEs) shown as dynamic displays than as photographs (dynamic advantage hypothesis). Previously, this has been tested by using dynamic FEs simulated either by morphing a neutral face into an emotional one or by computer animations. For the first time, we tested the dynamic advantage hypothesis by using high-speed recordings of actors' FEs. In the dynamic condition, stimuli were graded blends of two recordings (duration: 4.18 s), each describing the unfolding of an expression from neutral to apex. In the static condition, stimuli (duration: 3 s) were blends of just the apex of the same recordings. Stimuli for both conditions were generated by linearly morphing one expression into the other. Performance was estimated by a forced-choice task asking participants to identify which prototype the morphed stimulus was more similar to. Identification accuracy was not different between conditions. Response times (RTs) measured from stimulus onset were shorter for static than for dynamic stimuli. Yet, most responses to dynamic stimuli were given before expressions reached their apex. Thus, with a threshold model, we tested whether discriminative information is integrated more effectively in dynamic than in static conditions. We did not find any systematic difference. In short, neither identification accuracy nor RTs supported the dynamic advantage hypothesis.

  20. Real time facial expression recognition from image sequences using support vector machines

    NASA Astrophysics Data System (ADS)

    Kotsia, I.; Pitas, I.

    2005-07-01

    In this paper, a real-time method is proposed as a solution to the problem of facial expression classification in video sequences. The user manually places some of the Candide grid nodes on the face depicted at the first frame. The grid adaptation system, based on deformable models, tracks the entire Candide grid as the facial expression evolves through time, thus producing a grid that corresponds to the greatest intensity of the facial expression, as shown at the last frame. Certain points that are involved in creating the Facial Action Unit movements are selected. Their geometrical displacement information, defined as the difference of the coordinates between the last and the first frame, is extracted to be the input to a six-class Support Vector Machine system. The output of the system is the recognized facial expression. The proposed real-time system recognizes the 6 basic facial expressions with approximately 98% accuracy.
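
    The pipeline described in this abstract — per-node displacements between the first and last tracked frame fed to a six-class SVM — can be sketched as follows. This is a minimal illustration using scikit-learn with randomly generated placeholder data; the node count, feature extraction, and labels are assumptions for the sketch, not the authors' actual Candide tracking code.

```python
import numpy as np
from sklearn.svm import SVC

RNG = np.random.default_rng(0)
N_NODES = 104    # hypothetical number of tracked grid nodes
N_CLASSES = 6    # the six basic facial expressions

def displacement_features(first_frame, last_frame):
    """Per-node (dx, dy) displacement between the last and first frames,
    flattened into a single feature vector."""
    return (last_frame - first_frame).reshape(-1)

# Placeholder training data: random (nodes x 2) coordinate arrays stand in
# for the tracked grid at the first and last frame of each sequence.
X = np.stack([
    displacement_features(RNG.normal(size=(N_NODES, 2)),
                          RNG.normal(size=(N_NODES, 2)))
    for _ in range(120)
])
y = RNG.integers(0, N_CLASSES, size=120)  # expression labels 0..5

# Six-class SVM with a one-vs-rest decision function.
clf = SVC(kernel="rbf", decision_function_shape="ovr")
clf.fit(X, y)

# Classify a new sequence from its first and last tracked frames.
feat = displacement_features(RNG.normal(size=(N_NODES, 2)),
                             RNG.normal(size=(N_NODES, 2)))
pred = int(clf.predict(feat[None, :])[0])
```

    In a real system the coordinate arrays would come from the deformable-model grid tracker rather than a random generator, and the labels from annotated training sequences.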

  1. Misinterpretation of Facial Expressions of Emotion in Verbal Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Eack, Shaun M.; Mazefsky, Carla A.; Minshew, Nancy J.

    2015-01-01

    Facial emotion perception is significantly affected in autism spectrum disorder, yet little is known about how individuals with autism spectrum disorder misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with autism spectrum…

  2. Female Preschoolers' Verbal and Nonverbal Empathic Responses to Emotional Situations and Facial Expressions.

    ERIC Educational Resources Information Center

    Wiggers, Michiel; Willems, Henk

    1983-01-01

    Contrasts several conceptualizations about the interdependency of three empathy responses in kindergarten children: cognitive (understanding), affective (sharing), and facial empathy (facially expressing another's affect). Results corroborated a conceptualization in which affective and facial empathy were mediated by cognitive empathy. (Author/RH)

  3. High frequency of facial expressions corresponding to confusion, concentration, and worry in an analysis of naturally occurring facial expressions of Americans.

    PubMed

    Rozin, Paul; Cohen, Adam B

    2003-03-01

    College students were instructed to observe symmetric and asymmetric facial expressions and to report the target's judgment of the "emotion" she or he was expressing, the facial movements involved, and the more expressive side. For both asymmetric and symmetric expressions, some of the most common emotions or states reported are neither included in standard taxonomies of emotion nor studied as important signals. Confusion is the most common descriptor reported for asymmetric expressions and is commonly reported for symmetrical expressions as well. Other frequent descriptors were think-concentrate and worry. Confusion is characterized principally by facial movements around the eyes and has many properties usually attributed to emotions. There was no evidence for lateralization of positive versus negative valenced states.

  4. The Mysterious Noh Mask: Contribution of Multiple Facial Parts to the Recognition of Emotional Expressions

    PubMed Central

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    Background A Noh mask worn by expert actors when performing on a Japanese traditional Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. Methodology/Principal Findings In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes with happy expressions. This contingency tended to be reversed for a downward tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to the synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. The facial images having the mouth of an upward/downward tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. Conclusions/Significance The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the traditionally

  5. Facial expressions in common marmosets (Callithrix jacchus) and their use by conspecifics.

    PubMed

    Kemp, Caralyn; Kaplan, Gisela

    2013-09-01

    Facial expressions have been studied mainly in chimpanzees and have been shown to be important social signals. In platyrrhine and strepsirrhine primates, it has been doubted that facial expressions are differentiated enough, or the species socially capable enough, for facial expressions to be part of their communication system. However, in a series of experiments presenting olfactory, auditory and visual stimuli, we found that common marmosets (Callithrix jacchus) displayed an unexpected variety of facial expressions. Especially, olfactory and auditory stimuli elicited obvious facial displays (such as disgust), some of which are reported here for the first time. We asked whether specific facial responses to food and predator-related stimuli might act as social signals to conspecifics. We recorded two contrasting facial expressions (fear and pleasure) as separate sets of video clips and then presented these to cage mates of those marmosets shown in the images, while tempting the subject with food. Results show that the expression of a fearful face on screen significantly reduced time spent near the food bowl compared to the duration when a face showing pleasure was screened. This responsiveness to a cage mate's facial expressions suggests that the evolution of facial signals may have occurred much earlier in primate evolution than had been thought. PMID:23412667

  6. Facial expression of positive emotions in individuals with eating disorders.

    PubMed

    Dapelo, Marcela M; Hart, Sharon; Hale, Christiane; Morris, Robin; Lynch, Thomas R; Tchanturia, Kate

    2015-11-30

    A large body of research has associated Eating Disorders with difficulties in socio-emotional functioning and it has been argued that they may serve to maintain the illness. This study aimed to explore facial expressions of positive emotions in individuals with Anorexia Nervosa (AN) and Bulimia Nervosa (BN) compared to healthy controls (HC), through an examination of the Duchenne smile (DS), which has been associated with feelings of enjoyment, amusement and happiness (Ekman et al., 1990). Sixty participants (AN=20; BN=20; HC=20) were videotaped while watching a humorous film clip. The duration and intensity of DS were subsequently analyzed using the facial action coding system (FACS) (Ekman and Friesen, 2003). Participants with AN displayed DS for shorter durations than BN and HC participants, and their DS had lower intensity. In the clinical groups, lower duration and intensity of DS were associated with lower BMI, and use of psychotropic medication. The study is the first to explore DS in people with eating disorders, providing further evidence of difficulties in the socio-emotional domain in people with AN.

  7. Paedomorphic facial expressions give dogs a selective advantage.

    PubMed

    Waller, Bridget M; Peirce, Kate; Caeiro, Cátia C; Scheider, Linda; Burrows, Anne M; McCune, Sandra; Kaminski, Juliane

    2013-01-01

    How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging possibilities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs who exhibited facial expressions that enhance their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings, therefore, add to our understanding of early dog domestication as a complex co-evolutionary process.

  9. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  10. Evaluating Posed and Evoked Facial Expressions of Emotion from Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Faso, Daniel J.; Sasson, Noah J.; Pinkham, Amy E.

    2015-01-01

    Though many studies have examined facial affect perception by individuals with autism spectrum disorder (ASD), little research has investigated how facial expressivity in ASD is perceived by others. Here, naïve female observers (n = 38) judged the intensity, naturalness and emotional category of expressions produced by adults with ASD (n = 6) and…

  11. Mu desynchronization during observation and execution of facial expressions in 30-month-old children.

    PubMed

    Rayson, Holly; Bonaiuto, James John; Ferrari, Pier Francesco; Murray, Lynne

    2016-06-01

    Simulation theories propose that observing another's facial expression activates sensorimotor representations involved in the execution of that expression, facilitating recognition processes. The mirror neuron system (MNS) is a potential mechanism underlying simulation of facial expressions, with like neural processes activated both during observation and performance. Research with monkeys and adult humans supports this proposal, but so far there have been no investigations of facial MNS activity early in human development. The current study used electroencephalography (EEG) to explore mu rhythm desynchronization, an index of MNS activity, in 30-month-old children as they observed videos of dynamic emotional and non-emotional facial expressions, as well as scrambled versions of the same videos. We found significant mu desynchronization in central regions during observation and execution of both emotional and non-emotional facial expressions, which was right-lateralized for emotional and bilateral for non-emotional expressions during observation. These findings support previous research suggesting movement simulation during observation of facial expressions, and are the first to provide evidence for sensorimotor activation during observation of facial expressions, consistent with a functioning facial MNS at an early stage of human development.
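
    The mu desynchronization index used in this line of work is typically computed as the relative drop in 8–13 Hz power over central electrodes during observation or execution, compared with a baseline period. A minimal sketch of that computation on a synthetic signal follows; the sampling rate, band limits, and epoch construction are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import welch

FS = 500  # sampling rate in Hz (assumed)

def band_power(signal, fs, lo=8.0, hi=13.0):
    """Mean power spectral density in the mu band, via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

rng = np.random.default_rng(1)
t = np.arange(0.0, 2.0, 1.0 / FS)

# Baseline epoch: a strong 10 Hz mu rhythm plus noise.
baseline = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
# Observation epoch: the mu rhythm is attenuated (desynchronized).
event = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

# Event-related desynchronization: relative power decrease in the mu band
# (positive values indicate desynchronization).
erd = (band_power(baseline, FS) - band_power(event, FS)) / band_power(baseline, FS)
```

    With real EEG, the epochs would be extracted around stimulus onset at central electrodes (e.g. C3/C4), and the index averaged over trials before group comparison.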

  12. Spontaneous Facial Expressions in Congenitally Blind and Sighted Children Aged 8-11.

    ERIC Educational Resources Information Center

    Galati, Dario; Sini, Barbara; Schmidt, Susanne; Tinti, Carla

    2003-01-01

    This study found that the emotional facial expressions of 10 congenitally blind and 10 sighted children, ages 8-11, were similar. However, the frequency of certain facial movements was higher in the blind children than in the sighted children, and social influences were evident only in the expressions of the sighted children, who often masked…

  13. Compound facial expressions of emotion: from basic research to clinical applications

    PubMed Central

    Du, Shichuan; Martinez, Aleix M.

    2015-01-01

    Emotions are sometimes revealed through facial expressions. When these natural facial articulations involve the contraction of the same muscle groups in people of distinct cultural upbringings, this is taken as evidence of a biological origin of these emotions. While past research had identified facial expressions associated with a single internally felt category (eg, the facial expression of happiness when we feel joyful), we have recently studied facial expressions observed when people experience compound emotions (eg, the facial expression of happy surprise when we feel joyful in a surprised way, as, for example, at a surprise birthday party). Our research has identified 17 compound expressions consistently produced across cultures, suggesting that the number of facial expressions of emotion of biological origin is much larger than previously believed. The present paper provides an overview of these findings and shows evidence supporting the view that spontaneous expressions are produced using the same facial articulations previously identified in laboratory experiments. We also discuss the implications of our results in the study of psychopathologies, and consider several open research questions. PMID:26869845

  14. Clothing Color Value and Facial Expression: Effects on Evaluations of Female Job Applicants.

    ERIC Educational Resources Information Center

    Damhorst, Mary Lynn D.; Reed, J. Ann Pinaire

    1986-01-01

    Color value of clothing and facial expression were varied in photographs of six female job applicants. Male and female business persons (N=208) judged the photographs. Facial expression significantly affected evaluations of Character-Sociability characteristics. Clothing color value influenced perceptions of Potency, only for male interviewers.…

  16. Brief Report: Representational Momentum for Dynamic Facial Expressions in Pervasive Developmental Disorder

    ERIC Educational Resources Information Center

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-01-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of…

  17. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions.

    PubMed

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expressions recognition task. Recognition bias was measured as participants' tendency to over-attribute the anger label to other negative facial expressions. Participants' heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants' performance was controlled for age, cognitive and educational levels and for naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants' tendency to use the anger label was found. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children's "pre-existing bias" for anger labeling in a forced-choice emotion recognition task. Moreover, they strengthen the thesis according to which the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim's perceptive and attentive focus on salient environmental social stimuli. PMID:26509890

  18. Effectiveness of Teaching Naming Facial Expression to Children with Autism via Video Modeling

    ERIC Educational Resources Information Center

    Akmanoglu, Nurgul

    2015-01-01

    This study aims to examine the effectiveness of teaching naming emotional facial expression via video modeling to children with autism. Teaching the naming of emotions (happy, sad, scared, disgusted, surprised, feeling physical pain, and bored) was made by creating situations that lead to the emergence of facial expressions to children…

  20. Do Dynamic Facial Expressions Convey Emotions to Children Better than Do Static Ones?

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2015-01-01

    Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of…

  1. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions.

    PubMed

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expression recognition task. Recognition bias was measured as participants' tendency to over-attribute the anger label to other negative facial expressions. Participants' heart rate was assessed and related to their behavioral performance as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants' performance was controlled for age, cognitive and educational levels, and naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants' tendency to use the anger label was evidenced. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children's "pre-existing bias" for anger labeling in forced-choice emotion recognition tasks. Moreover, they strengthen the thesis that the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim's perceptive and attentive focus on salient environmental social stimuli.

  2. Does Gaze Direction Modulate Facial Expression Processing in Children with Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…

  3. Preschooler's Faces in Spontaneous Emotional Contexts--How Well Do They Match Adult Facial Expression Prototypes?

    ERIC Educational Resources Information Center

    Gaspar, Augusta; Esteves, Francisco G.

    2012-01-01

    Prototypical facial expressions of emotion, also known as universal facial expressions, are the underpinnings of most research concerning recognition of emotions in both adults and children. Data on natural occurrences of these prototypes in natural emotional contexts are rare and difficult to obtain in adults. By recording naturalistic…

  4. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    PubMed

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development-The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions-angry, fearful, sad, happy, surprised, and disgusted-and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  5. Selective attention and facial expression recognition in patients with Parkinson's disease.

    PubMed

    Alonso-Recio, Laura; Serrano, Juan M; Martín, Pilar

    2014-06-01

    Parkinson's disease (PD) has been associated with facial expression recognition difficulties. However, this impairment could be secondary to impairments in other cognitive processes involved in recognition, such as selective attention. This study investigates the influence of two selective attention components (inhibition and visual search) on facial expression recognition in PD. We compared the facial expression and non-emotional stimulus recognition abilities of 51 patients and 51 healthy controls by means of an adapted Stroop task and the "Face in the Crowd" paradigm, which assess inhibition and visual search abilities, respectively. Patients scored worse than controls in both tasks with facial expressions, but not with the other non-emotional stimuli, indicating a specific emotional recognition impairment that does not depend on selective attention abilities. This should be taken into account in patients' neuropsychological assessment, given the relevance of emotional facial expression for social communication in everyday settings. PMID:24760956

  6. The amygdalo-motor pathways and the control of facial expressions

    PubMed Central

    Gothard, Katalin M.

    2013-01-01

    Facial expressions reflect decisions about the perceived meaning of social stimuli and the expected socio-emotional outcome of responding (or not) with a reciprocating expression. The decision to produce a facial expression emerges from the joint activity of a network of structures that include the amygdala and multiple, interconnected cortical and subcortical motor areas. Reciprocal transformations between these sensory and motor signals give rise to distinct brain states that promote or impede the production of facial expressions. The muscles of the upper and lower face are controlled by anatomically distinct motor areas. Facial expressions engage the lower and upper face to different extents and thus require distinct patterns of neural activity distributed across multiple facial motor areas in ventrolateral frontal cortex, the supplementary motor area, and two areas in the midcingulate cortex. The distributed nature of the decision manifests in the joint activation of multiple motor areas that initiate the production of facial expression. Concomitantly, multiple areas, including the amygdala, monitor ongoing overt behaviors (the expression itself) and the covert, autonomic responses that accompany emotional expressions. As the production of facial expressions is brought into the framework of formal decision making, an important challenge will be to incorporate autonomic and visceral states into decisions that govern the receiving-emitting cycle of social signals. PMID:24678289

  7. [Emotional intelligence and oscillatory responses on the emotional facial expressions].

    PubMed

    Kniazev, G G; Mitrofanova, L G; Bocharov, A V

    2013-01-01

    Emotional intelligence-related differences in oscillatory responses to emotional facial expressions were investigated in 48 subjects (26 men and 22 women) aged 18-30 years. Participants were instructed to evaluate the emotional expression (angry, happy, or neutral) of each presented face on an analog scale ranging from -100 (very hostile) to +100 (very friendly). High emotional intelligence (EI) participants were found to be more sensitive to the emotional content of the stimuli. This showed up both in their subjective evaluation of the stimuli and in a stronger EEG theta synchronization at an earlier (between 100 and 500 ms after face presentation) processing stage. Source localization using sLORETA showed that this effect was localized in the fusiform gyrus upon the presentation of angry faces and in the posterior cingulate gyrus upon the presentation of happy faces. At a later processing stage (500-870 ms), event-related theta synchronization in high emotional intelligence subjects was higher in the left prefrontal cortex upon the presentation of happy faces, but lower in the anterior cingulate cortex upon the presentation of angry faces. This suggests the existence of a mechanism that can selectively increase positive emotions and reduce negative emotions.

  8. Standardized mood induction with happy and sad facial expressions.

    PubMed

    Schneider, F; Gur, R C; Gur, R E; Muenz, L R

    1994-01-01

    The feasibility of applying ecologically valid and socially relevant emotional stimuli in a standardized fashion to obtain reliable mood changes in healthy subjects was examined. The stimuli consisted of happy and sad facial expressions varying in intensity. Two mood-induction procedures (happy and sad, each consisting of 40 slides) were administered to 24 young healthy subjects, who were instructed to look at each slide (self-paced) and try to feel the happy or sad mood expressed by the person in the picture. On an emotional self-rating scale, subjects rated themselves as relatively happier during the happy mood-induction condition and as relatively sadder during the sad mood-induction condition. Conversely, they reported that they were less happy during the sad mood-induction condition and less sad during the happy mood-induction condition. The effects were generalized to positive and negative affect as measured by the Positive and Negative Affect Scale. The intraindividual variability in the effect was very small. In a retest study after 1 month, the mood-induction effects showed good stability over time. The results encourage the use of this mood-induction procedure as a neurobehavioral probe in physiologic neuroimaging studies for investigating the neural substrates of emotional experience.

  9. Capturing Physiology of Emotion along Facial Muscles: A Method of Distinguishing Feigned from Involuntary Expressions

    NASA Astrophysics Data System (ADS)

    Khan, Masood Mehmood; Ward, Robert D.; Ingleby, Michael

    The ability to distinguish feigned from involuntary expressions of emotions could help in the investigation and treatment of neuropsychiatric and affective disorders and in the detection of malingering. This work investigates differences in emotion-specific patterns of thermal variations along the major facial muscles. Using experimental data extracted from 156 images, we attempted to classify patterns of emotion-specific thermal variations into neutral, and voluntary and involuntary expressions of positive and negative emotive states. Initial results suggest (i) each facial muscle exhibits a unique thermal response to various emotive states; (ii) the pattern of thermal variances along the facial muscles may assist in classifying voluntary and involuntary facial expressions; and (iii) facial skin temperature measurements along the major facial muscles may be used in automated emotion assessment.

  10. A detailed investigation of facial expression processing in congenital prosopagnosia as compared to acquired prosopagnosia.

    PubMed

    Humphreys, Kate; Avidan, Galia; Behrmann, Marlene

    2007-01-01

    Whether the ability to recognize facial expression can be preserved in the absence of the recognition of facial identity remains controversial. The current study reports the results of a detailed investigation of facial expression recognition in three congenital prosopagnosic (CP) participants, in comparison with two patients with acquired prosopagnosia (AP) and a large group of 30 neurologically normal participants, including individually age- and gender-matched controls. Participants completed a fine-grained expression recognition paradigm requiring a six-alternative forced-choice response to continua of morphs of six different basic facial expressions (e.g. happiness and surprise). Accuracy, sensitivity and reaction times were measured. The performance of all three CP individuals was indistinguishable from that of controls, even for the most subtle expressions. In contrast, both individuals with AP displayed pronounced difficulties with the majority of expressions. The results from the CP participants attest to the dissociability of the processing of facial identity and of facial expression. Whether this remarkably good expression recognition is achieved through normal, or compensatory, mechanisms remains to be determined. Either way, this normal level of performance does not extend to include facial identity. PMID:16917773

  11. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions.

    PubMed

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions.

  12. Effect of facial expressions on student's comprehension recognition in virtual educational environments.

    PubMed

    Sathik, Mohamed; Jonathan, Sofia G

    2013-01-01

    The scope of this research is to examine whether the facial expressions of students are a tool for the lecturer to interpret the comprehension level of students in a virtual classroom, and also to identify the impact of facial expressions during lectures and the level of comprehension shown by these expressions. Our goal is to identify physical behaviors of the face that are linked to emotional states, and then to identify how these emotional states are linked to student comprehension. In this work, the effectiveness of students' facial expressions in non-verbal communication in a virtual pedagogical environment was investigated first. Next, the specific elements of learner behavior for the different emotional states, and the relevant facial expressions signaled by the action units, were interpreted. Finally, the work focused on finding the impact of the relevant facial expressions on student comprehension. Experimentation was done through a survey involving quantitative observations of lecturers in the classroom, in which the behaviors of students were recorded and statistically analyzed. The results show that facial expression is the nonverbal communication mode most frequently used by students in the virtual classroom, and that students' facial expressions are significantly correlated with their emotions, which helps in recognizing their comprehension of the lecture.

  13. Brief report: Representational momentum for dynamic facial expressions in pervasive developmental disorder.

    PubMed

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-03-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of expressed emotion in 13 individuals with PDD and 13 typically developing controls. We presented dynamic and static emotional (fearful and happy) expressions. Participants were asked to match a changeable emotional face display with the last presented image. The results showed that both groups perceived the last image of dynamic facial expression to be more emotionally exaggerated than the static facial expression. This finding suggests that individuals with PDD have an intact perceptual mechanism for processing dynamic information in another individual's face. PMID:19763805

  15. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions.

    PubMed

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on the moderate emotions; to date, few studies have been conducted to investigate the explicit and implicit processes of peak emotions. In the current study, we used transiently peak intense expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and the face-body compounds, and eye-tracking movement was recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous, and the body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, the subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that winning face prime facilitated reaction to winning body target, whereas losing face prime inhibited reaction to winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, revised subliminal affective priming task and a strict awareness test were used to examine the validity of unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction time to both winning body targets and losing body targets was influenced by the invisibly peak facial expression primes, which indicated the

  19. Dynamic stimuli demonstrate a categorical representation of facial expression in the amygdala.

    PubMed

    Harris, Richard J; Young, Andrew W; Andrews, Timothy J

    2014-04-01

    Face-selective regions in the amygdala and posterior superior temporal sulcus (pSTS) are strongly implicated in the processing of transient facial signals, such as expression. Here, we measured neural responses in participants while they viewed dynamic changes in facial expression. Our aim was to explore how facial expression is represented in different face-selective regions. Short movies were generated by morphing between faces posing a neutral expression and a prototypical expression of a basic emotion (anger, disgust, fear, happiness, or sadness). These dynamic stimuli were presented in a block design with four stimulus conditions: (1) same-expression change, same identity; (2) same-expression change, different identity; (3) different-expression change, same identity; and (4) different-expression change, different identity. Within a same-expression change condition, each movie in a block showed the same change in expression, whereas in the different-expression change conditions each movie showed a different change in expression. Facial identity remained constant during each movie, but in the different-identity conditions the facial identity varied between the movies in a block. The amygdala, but not the posterior STS, demonstrated a greater response to blocks in which each movie morphed from neutral to a different emotion category compared to blocks in which each movie morphed to the same emotion category. Neural adaptation in the amygdala was not affected by changes in facial identity. These results are consistent with a role of the amygdala in category-based representation of facial expressions of emotion. PMID:24447769

  20. Pose-variant facial expression recognition using an embedded image system

    NASA Astrophysics Data System (ADS)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction has been automated facial expression recognition. By recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified as happiness, neutral, sadness, surprise, or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
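
    The feature stage this record describes (pairwise distances between tracked landmark points, fed to a classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the landmark coordinates are invented, and a toy nearest-centroid classifier stands in for the SVM stage.

    ```python
    import math
    from itertools import combinations

    def distance_features(landmarks):
        """Turn facial landmark points (x, y) into a feature vector of
        pairwise Euclidean distances between all point pairs."""
        return [math.dist(p, q) for p, q in combinations(landmarks, 2)]

    class NearestCentroid:
        """Toy stand-in for the SVM stage: assign a feature vector to the
        class whose mean (centroid) feature vector is closest."""
        def fit(self, X, y):
            by_label = {}
            for features, label in zip(X, y):
                by_label.setdefault(label, []).append(features)
            # Average each feature dimension within a class.
            self.centroids = {
                label: [sum(col) / len(col) for col in zip(*rows)]
                for label, rows in by_label.items()
            }
            return self

        def predict(self, x):
            return min(self.centroids,
                       key=lambda lbl: math.dist(x, self.centroids[lbl]))
    ```

    With the paper's 14 tracked points, `distance_features` would yield 91 distances (all pairs of 14 points); the toy triangles below use 3 points only to keep the sketch checkable by hand.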

  1. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults

    PubMed Central

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants. PMID:25610415

  2. Identity recognition and happy and sad facial expression recall: influence of depressive symptoms.

    PubMed

    Jermann, Françoise; van der Linden, Martial; D'Argembeau, Arnaud

    2008-05-01

    Relatively few studies have examined memory bias for social stimuli in depression or dysphoria. The aim of this study was to investigate the influence of depressive symptoms on memory for facial information. A total of 234 participants completed the Beck Depression Inventory II and a task examining memory for facial identity and expression of happy and sad faces. For both facial identity and expression, the recollective experience was measured with the Remember/Know/Guess procedure (Gardiner & Richardson-Klavehn, 2000). The results show no major association between depressive symptoms and memory for identities. However, dysphoric individuals consciously recalled (Remember responses) more sad facial expressions than non-dysphoric individuals. These findings suggest that sad facial expressions led to more elaborate encoding, and thereby better recollection, in dysphoric individuals. PMID:18432481

  3. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smiling) despite variability among individuals and in face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The results show reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting of visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.

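    The convolutional network in the record above is described only at a high level; as an illustration, a minimal convolution, ReLU, and max-pooling forward pass (the basic building block of such a network, not the authors' actual architecture) can be sketched in NumPy:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    h, w = x.shape
    h, w = h - h % size, w - w % size  # drop ragged edges
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 "image": one convolution + ReLU + 2x2 max pooling
img = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[1.0, -1.0], [1.0, -1.0]])  # vertical-edge detector
feat = max_pool(relu(conv2d(img, edge)))
print(feat.shape)  # (2, 2)
```

    A real network stacks many such filter banks and learns the kernel weights by backpropagation; this sketch only shows the data flow of one layer.
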
  4. Cognitive tasks during expectation affect the congruency ERP effects to facial expressions

    PubMed Central

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Expectancy congruency has been shown to modulate event-related potentials (ERPs) to emotional stimuli, such as facial expressions. However, it is unknown whether the congruency ERP effects to facial expressions can be modulated by cognitive manipulations during stimulus expectation. To this end, electroencephalography (EEG) was recorded while participants viewed (neutral and fearful) facial expressions. Each trial started with a cue, predicting a facial expression, followed by an expectancy interval without any cues and subsequently the face. In half of the trials, participants had to solve a cognitive task in which different letters were presented for target letter detection during the expectancy interval. Furthermore, facial expressions were congruent with the cues in 75% of all trials. ERP results revealed that for fearful faces, the cognitive task during expectation altered the congruency effect in N170 amplitude; congruent compared to incongruent fearful faces evoked larger N170 in the non-task condition but the congruency effect was not evident in the task condition. Regardless of facial expression, the congruency effect was generally altered by the cognitive task during expectation in P3 amplitude; the amplitudes were larger for incongruent compared to congruent faces in the non-task condition but the congruency effect was not shown in the task condition. The findings indicate that cognitive tasks during expectation reduce the processing of expectation and subsequently, alter congruency ERP effects to facial expressions. PMID:26578938

  5. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding.

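    The mediation analyses above test whether a brain measure influences decoding ability through an intermediate variable. A minimal product-of-coefficients sketch on synthetic data (illustrative only; the study used voxel-based morphometry and formal mediation models, and these variable names and effect sizes are invented):

```python
import numpy as np

def simple_mediation(x, m, y):
    """Product-of-coefficients mediation: indirect effect = a * b,
    where a is the slope of x -> m and b the slope of m -> y
    controlling for x (both fitted by ordinary least squares)."""
    def slopes(predictors, target):
        A = np.column_stack([np.ones(len(target))] + predictors)
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        return coef[1:]  # drop the intercept
    a = slopes([x], m)[0]
    b = slopes([x, m], y)[1]
    return a * b

# Synthetic data mimicking the mediation structure described above
rng = np.random.default_rng(1)
gmv = rng.standard_normal(200)                          # predictor, e.g. regional GMV
feelings = 0.6 * gmv + 0.1 * rng.standard_normal(200)   # mediator
decoding = 0.5 * feelings + 0.1 * rng.standard_normal(200)  # outcome
indirect = simple_mediation(gmv, feelings, decoding)
print(indirect > 0.1)  # True: positive indirect effect via the mediator
```

    In practice the indirect effect's significance is assessed with bootstrap confidence intervals rather than a point estimate alone.
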
  8. Facial feedback affects valence judgments of dynamic and static emotional expressions.

    PubMed

    Hyniewska, Sylwia; Sato, Wataru

    2015-01-01

    The ability to judge others' emotions is required for the establishment and maintenance of smooth interactions in a community. Several lines of evidence suggest that the attribution of meaning to a face is influenced by the facial actions produced by an observer during the observation of a face. However, empirical studies testing causal relationships between observers' facial actions and emotion judgments have reported mixed findings. This issue was investigated by measuring emotion judgments in terms of valence and arousal dimensions while comparing dynamic vs. static presentations of facial expressions. We presented pictures and videos of facial expressions of anger and happiness. Participants (N = 36) were asked to differentiate between the gender of faces by activating the corrugator supercilii muscle (brow lowering) and zygomaticus major muscle (cheek raising). They were also asked to evaluate the internal states of the stimuli using the affect grid while maintaining the facial action until they finished responding. The cheek raising condition increased the attributed valence scores compared with the brow-lowering condition. This effect of facial actions was observed for static as well as for dynamic facial expressions. These data suggest that facial feedback mechanisms contribute to the judgment of the valence of emotional facial expressions. PMID:25852608

  9. The role of the analyst's facial expressions in psychoanalysis and psychoanalytic therapy.

    PubMed

    Searles, H F

    This paper, while acknowledging implicitly the importance of transference-distortions in the patient's perceptions of the analyst's countenance, focuses primarily upon the real changes in the latter's facial expressions. The analyst's face has a central role in the phase of therapeutic symbiosis, as well as in subsequent individuation. It is in the realm of the analyst's facial expressions that the borderline patient, for example, can best find a bridge out of autism and into therapeutically symbiotic relatedness with the analyst. During this latter phase, then, each participant's facial expressions "belong" as much to the other as to oneself; that is, the expressions of each person are in the realm of transitional phenomena for both of them. The analyst's facial expressions are a highly, and often centrally, significant dimension of both psychoanalysis and psychoanalytic therapy. Illustrative clinical vignettes are presented from work with both patients who use the couch and those who do not. PMID:6511198

  10. Effects of cultural characteristics on building an emotion classifier through facial expression analysis

    NASA Astrophysics Data System (ADS)

    da Silva, Flávio Altinier Maximiano; Pedrini, Helio

    2015-03-01

    Facial expressions are an important demonstration of human moods and emotions. Algorithms capable of recognizing facial expressions and associating them with emotions were developed and employed to compare the expressions that different cultural groups use to show their emotions. Static pictures of predominantly occidental and oriental subjects from public datasets were used to train machine learning algorithms, while local binary patterns, histograms of oriented gradients (HOGs), and Gabor filters were employed to describe the facial expressions for six different basic emotions. The most consistent combination, formed by the association of the HOG descriptor and support vector machines, was then used to classify the other cultural group: there was a strong drop in accuracy, meaning that the subtle differences between the facial expressions of each culture affected classifier performance. Finally, a classifier was trained with images from both occidental and oriental subjects; its accuracy was higher on multicultural data, evidencing the need for a multicultural training set to build an efficient classifier.

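    The histogram-of-oriented-gradients (HOG) descriptor mentioned above reduces an image patch to a magnitude-weighted histogram of gradient orientations. A minimal sketch of that core step, assuming 9 unsigned orientation bins rather than the paper's exact parameters:

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Core HOG step: histogram of unsigned gradient orientations,
    weighted by gradient magnitude, for one cell."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist / (np.linalg.norm(hist) + 1e-6)  # L2 normalisation

# A patch with purely horizontal intensity change: gradient along x, angle 0
patch = np.tile(np.arange(8.0), (8, 1))
h = orientation_histogram(patch)
print(h.argmax())  # 0: all energy in the 0-degree bin
```

    A full HOG descriptor concatenates such histograms over a grid of cells with block-wise normalisation before feeding them to the SVM.
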
  12. Processing Facial Expressions of Emotion: Upright vs. Inverted Images

    PubMed Central

    Bimler, David L.; Skwarek, Slawomir J.; Paramei, Galina V.

    2012-01-01

    We studied discrimination of briefly presented upright vs. inverted emotional facial expressions (FEs), hypothesizing that inversion would impair emotion decoding by disrupting holistic FE processing. Stimuli were photographs of seven emotion prototypes, posed by a male and a female poser (Ekman and Friesen, 1976), and eight intermediate morphs in each set. Subjects made speeded Same/Different judgments of emotional content for all upright (U) or inverted (I) pairs of FEs, presented for 500 ms, with each pair shown 100 times. Signal Detection Theory revealed the sensitivity measure d′ to be slightly but significantly higher for the upright FEs. In further analysis using multidimensional scaling (MDS), percentages of Same judgments were taken as an index of pairwise perceptual similarity, separately for the U and I presentation modes. The outcome was a 4D “emotion expression space,” with FEs represented as points and the dimensions identified as Happy–Sad, Surprise/Fear, Disgust, and Anger. The solutions for U and I FEs were compared by means of cophenetic and canonical correlation, Procrustes analysis, and weighted-Euclidean analysis of individual differences. Differences in discrimination produced by inverting FE stimuli were found to be small and manifested as minor changes in the MDS structure or weights of the dimensions. Solutions differed substantially more between the two posers, however. Notably, for stimuli containing elements of Happiness (whether U or I), the MDS structure showed signs of implicit categorization, indicating that mouth curvature – the dominant feature conveying Happiness – is visually salient and receives early processing. The findings suggest that for briefly presented FEs, Same/Different decisions are dominated by low-level visual analysis of abstract patterns of lightness and edge filters, but also reflect emerging featural analysis. These analyses, insensitive to face orientation, enable initial positive/negative Valence categorization of FEs.

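    The sensitivity measure d′ above comes from signal detection theory: d′ = z(hit rate) - z(false-alarm rate), with z the inverse of the standard normal CDF. A stdlib sketch with made-up rates (the study's actual rates are not reported here):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for upright vs. inverted faces
upright = d_prime(0.85, 0.20)
inverted = d_prime(0.80, 0.25)
print(round(upright - inverted, 3) > 0)  # True: upright slightly more sensitive
```

    In practice, hit and false-alarm rates of exactly 0 or 1 are first adjusted (e.g. the log-linear correction), since z is undefined at those extremes.
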
  13. Realistic facial expression of virtual human based on color, sweat, and tears effects.

    PubMed

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances such as sweating when scared, tears when crying, and blushing with anger or happiness is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles and emotions, and the fluid properties with sweating and tear initiators, are incorporated. The action units (AUs) of the Facial Action Coding System are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the appearance of facial skin color are measured using a pulse oximeter system and a 3D skin analyzer. The results show that virtual human facial expression is enhanced by mimicking actual sweating and tear simulations for all extreme expressions. The proposed method contributes to the facial animation and game industries, as well as to computer graphics. PMID:25136663

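    The sweat and tear effects above rely on a particle system; a minimal sketch of the underlying update loop, assuming plain gravity with explicit Euler integration rather than the paper's full SPH solver:

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8])

def step(pos, vel, dt=0.01):
    """One explicit-Euler update for a batch of 2D 'tear' particles."""
    vel = vel + GRAVITY * dt
    pos = pos + vel * dt
    return pos, vel

# Ten particles released at the eye line (y = 1.0), initially at rest
pos = np.column_stack([np.linspace(-0.05, 0.05, 10), np.ones(10)])
vel = np.zeros_like(pos)
for _ in range(100):  # simulate one second
    pos, vel = step(pos, vel)
print(np.all(pos[:, 1] < 1.0))  # True: all particles have fallen
```

    SPH adds inter-particle pressure and viscosity forces computed from smoothing kernels over neighbouring particles, which is what produces fluid-like streaking on the face.
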
  16. Processing of Individual Items during Ensemble Coding of Facial Expressions

    PubMed Central

    Li, Huiyun; Ji, Luyan; Tong, Ke; Ren, Naixin; Chen, Wenfeng; Liu, Chang Hong; Fu, Xiaolan

    2016-01-01

    There is growing evidence that human observers are able to extract the mean emotion or other type of information from a set of faces. The most intriguing aspect of this phenomenon is that observers often fail to identify or form a representation for individual faces in a face set. However, most of these results were based on judgments under limited processing resource. We examined a wider range of exposure time and observed how the relationship between the extraction of a mean and representation of individual facial expressions would change. The results showed that with an exposure time of 50 ms for the faces, observers were more sensitive to mean representation over individual representation, replicating the typical findings in the literature. With longer exposure time, however, observers were able to extract both individual and mean representation more accurately. Furthermore, diffusion model analysis revealed that the mean representation is also more prone to suffer from the noise accumulated in redundant processing time and leads to a more conservative decision bias, whereas individual representations seem more resistant to this noise. Results suggest that the encoding of emotional information from multiple faces may take two forms: single face processing and crowd face processing. PMID:27656154

  17. A Modified Sparse Representation Method for Facial Expression Recognition.

    PubMed

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we investigate a facial expression recognition method based on a modified sparse representation recognition (MSRR) approach. In the first stage, we use Haar-like features plus LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of directly taking the dictionary from the samples, and add block dictionary training to the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). In addition, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise on a self-built database, the Japanese JAFFE database, and CMU's CK database. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition performance and time efficiency. Simulation results show that the coefficients of the MSRR method contain classifying information, which is capable of improving computing speed and achieving a satisfying recognition result. PMID:26880878

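    The stOMP step above is a stagewise variant of orthogonal matching pursuit. A minimal sketch of standard OMP over a toy dictionary (not the authors' stOMP, learned dictionary, or dynamic regularization):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then re-fit by least squares."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Overcomplete toy dictionary: identity atoms plus one "flat" atom
D = np.hstack([np.eye(8), np.ones((8, 1)) / np.sqrt(8)])
x_true = np.zeros(9)
x_true[3], x_true[5] = 1.5, -2.0
x_hat = omp(D, D @ x_true, n_nonzero=2)
print(np.allclose(x_hat, x_true))  # True: the 2-sparse code is recovered
```

    In sparse-representation classification, the recovered coefficients are grouped by class and the test face is assigned to the class whose atoms yield the smallest reconstruction error.
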
  20. Cultural similarities and differences in perceiving and recognizing facial expressions of basic emotions.

    PubMed

    Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W

    2016-03-01

    The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face.

  2. Automated decoding of facial expressions reveals marked differences in children when telling antisocial versus prosocial lies.

    PubMed

    Zanette, Sarah; Gao, Xiaoqing; Brunet, Megan; Bartlett, Marian Stewart; Lee, Kang

    2016-10-01

    The current study used computer vision technology to examine the nonverbal facial expressions of children (6-11 years old) telling antisocial and prosocial lies. Children in the antisocial lying group completed a temptation resistance paradigm where they were asked not to peek at a gift being wrapped for them. All children peeked at the gift and subsequently lied about their behavior. Children in the prosocial lying group were given an undesirable gift and asked if they liked it. All children lied about liking the gift. Nonverbal behavior was analyzed using the Computer Expression Recognition Toolbox (CERT), which employs the Facial Action Coding System (FACS), to automatically code children's facial expressions while lying. Using CERT, children's facial expressions during antisocial and prosocial lying were accurately and reliably differentiated significantly above chance-level accuracy. The basic expressions of emotion that distinguished antisocial lies from prosocial lies were joy and contempt. Children expressed joy more in prosocial lying than in antisocial lying. Girls showed more joy and less contempt compared with boys when they told prosocial lies. Boys showed more contempt when they told prosocial lies than when they told antisocial lies. The key action units (AUs) that differentiate children's antisocial and prosocial lies are blink/eye closure, lip pucker, and lip raise on the right side. Together, these findings indicate that children's facial expressions differ while telling antisocial versus prosocial lies. The reliability of CERT in detecting such differences in facial expression suggests the viability of using computer vision technology in deception research. PMID:27318957

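    The AU-based discrimination above can be illustrated with a nearest-centroid classifier over per-child AU intensity vectors; the feature values below are invented for illustration and are not actual CERT output:

```python
import numpy as np

def nearest_centroid(train_X, train_y, x):
    """Classify x by the closest class mean in AU-intensity space."""
    labels = sorted(set(train_y))
    centroids = {c: train_X[np.array(train_y) == c].mean(axis=0) for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(x - centroids[c]))

# Hypothetical per-child mean intensities for [joy, contempt]
X = np.array([[0.8, 0.10], [0.7, 0.20], [0.9, 0.15],   # prosocial lies: more joy
              [0.2, 0.60], [0.3, 0.50], [0.25, 0.55]]) # antisocial lies: more contempt
y = ["prosocial"] * 3 + ["antisocial"] * 3
print(nearest_centroid(X, y, np.array([0.75, 0.12])))  # prosocial
```

    Real studies of this kind use cross-validated classifiers over many AUs and report above-chance accuracy rather than a single labelled example.
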
  3. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition.

    PubMed

    Willis, Megan L; Murphy, Jillian M; Ridley, Nicole J; Vercammen, Ans

    2015-12-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS to responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction.

  4. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.

    PubMed

    Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G

    2014-01-20

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although facial expressions are highly dynamic, little is known about the form and function of their temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories, and instead suggesting four.

  5. Revisiting the Relationship between the Processing of Gaze Direction and the Processing of Facial Expression

    ERIC Educational Resources Information Center

    Ganel, Tzvi

    2011-01-01

    There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension.…

  6. Influence of Intensity on Children's Sensitivity to Happy, Sad, and Fearful Facial Expressions

    ERIC Educational Resources Information Center

    Gao, Xiaoqing; Maurer, Daphne

    2009-01-01

    Most previous studies investigating children's ability to recognize facial expressions used only intense exemplars. Here we compared the sensitivity of 5-, 7-, and 10-year-olds with that of adults (n = 24 per age group) for less intense expressions of happiness, sadness, and fear. The developmental patterns differed across expressions. For…

  7. Does Facial Expressivity Count? How Typically Developing Children Respond Initially to Children with Autism

    ERIC Educational Resources Information Center

    Stagg, Steven D.; Slavny, Rachel; Hand, Charlotte; Cardoso, Alice; Smith, Pamela

    2014-01-01

    Research investigating expressivity in children with autism spectrum disorder has reported flat affect or bizarre facial expressivity within this population; however, the impact expressivity may have on first impression formation has received little research input. We examined how videos of children with autism spectrum disorder were rated for…

  8. Recognition of facial expressions and prosodic cues with graded emotional intensities in adults with Asperger syndrome.

    PubMed

    Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-09-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.

  9. Face-selective regions differ in their ability to classify facial expressions.

    PubMed

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-04-15

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: the amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter.
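    The decoding analysis above trains a support vector machine on voxel response patterns and asks whether it can separate expression categories. A toy stand-in for that idea, using a simple perceptron in place of a linear SVM (the 4-voxel "patterns" and labels below are fabricated, not the study's data):

    ```python
    # Toy sketch (not the study's pipeline): a linear perceptron as a
    # stand-in for the linear support-vector classifier used to decode
    # expression category from voxel response patterns.
    def train_perceptron(data, labels, epochs=20, lr=0.1):
        """Classic perceptron updates on misclassified samples."""
        w = [0.0] * len(data[0])
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(data, labels):  # y in {-1, +1}
                if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
        return w, b

    def predict(w, b, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

    # +1 = fearful, -1 = non-fearful (neutral/angry/happy), mirroring the
    # amygdala contrast described above; the values are invented.
    patterns = [[1.0, 0.9, 0.1, 0.0], [0.9, 1.0, 0.0, 0.1],
                [0.1, 0.0, 1.0, 0.9], [0.0, 0.1, 0.9, 1.0]]
    labels = [1, 1, -1, -1]
    w, b = train_perceptron(patterns, labels)
    print([predict(w, b, x) for x in patterns])  # should recover the labels
    ```

    In practice such analyses use a regularized linear SVM with cross-validation over held-out trials, but the core question is the same: does a linear boundary in voxel space separate the categories above chance?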

  10. French-speaking children’s freely produced labels for facial expressions

    PubMed Central

    Maassarani, Reem; Gosselin, Pierre; Montembeault, Patricia; Gagnon, Mathieu

    2014-01-01

    In this study, we investigated the labeling of facial expressions in French-speaking children. The participants were 137 French-speaking children, between the ages of 5 and 11 years, recruited from three elementary schools in Ottawa, Ontario, Canada. The facial expressions included expressions of happiness, sadness, fear, surprise, anger, and disgust. Participants were shown one facial expression at a time, and asked to say what the stimulus person was feeling. Participants’ responses were coded by two raters who made judgments concerning the specific emotion category in which the responses belonged. 5- and 6-year-olds were quite accurate in labeling facial expressions of happiness, anger, and sadness but far less accurate for facial expressions of fear, surprise, and disgust. An improvement in accuracy as a function of age was found for fear and surprise only. Labeling facial expressions of disgust proved to be very difficult for the children, even for the 11-year-olds. In order to examine the fit between the model proposed by Widen and Russell (2003) and our data, we looked at the number of participants who had the predicted response patterns. Overall, 88.52% of the participants did. Most of the participants used between 3 and 5 labels, with correspondence percentages varying between 80.00% and 100.00%. Our results suggest that the model proposed by Widen and Russell (2003) is not limited to English-speaking children, but also accounts for the sequence of emotion labeling in French-Canadian children. PMID:24926281

  11. Revealing variations in perception of mental states from dynamic facial expressions: a cautionary note.

    PubMed

    Back, Elisa; Jordan, Timothy R

    2014-01-01

    Although a great deal of research has been conducted on the recognition of basic facial emotions (e.g., anger, happiness, sadness), much less research has been carried out on the more subtle facial expressions of an individual's mental state (e.g., anxiety, disinterest, relief). Of particular concern is that these mental state expressions provide a crucial source of communication in everyday life but little is known about the accuracy with which natural dynamic facial expressions of mental states are identified and, in particular, the variability in mental state perception that is produced. Here we report the findings of two studies that investigated the accuracy and variability with which dynamic facial expressions of mental states were identified by participants. Both studies used stimuli carefully constructed using procedures adopted in previous research, and free-report (Study 1) and forced-choice (Study 2) measures of response accuracy and variability. The findings of both studies showed levels of response accuracy that were accompanied by substantial variation in the labels assigned by observers to each mental state. Thus, when mental states are identified from facial expressions in experiments, the identities attached to these expressions appear to vary considerably across individuals. This variability raises important issues for understanding the identification of mental states in everyday situations and for the use of responses in facial expression research.

  12. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  13. 3D Face Model Dataset: Automatic Detection of Facial Expressions and Emotions for Educational Environments

    ERIC Educational Resources Information Center

    Chickerur, Satyadhyan; Joshi, Kartik

    2015-01-01

    Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…

  14. Impaired recognition of prosody and subtle emotional facial expressions in Parkinson's disease.

    PubMed

    Buxton, Sharon L; MacDonald, Lorraine; Tippett, Lynette J

    2013-04-01

    Accurately recognizing the emotional states of others is crucial for successful social interactions and social relationships. Individuals with Parkinson's disease (PD) have shown deficits in emotional recognition abilities although findings have been inconsistent. This study examined recognition of emotions from prosody and from facial emotional expressions with three levels of subtlety, in 30 individuals with PD (without dementia) and 30 control participants. The PD group were impaired on the prosody task, with no differential impairments in specific emotions. PD participants were also impaired at recognizing facial expressions of emotion, with a significant association between how well they could recognize emotions in the two modalities, even after controlling for disease severity. When recognizing facial expressions, the PD group had no difficulty identifying prototypical Ekman and Friesen (1976) emotional faces, but were poorer than controls at recognizing the moderate and difficult levels of subtle expressions. They were differentially impaired at recognizing moderately subtle expressions of disgust and sad expressions at the difficult level. Notably, however, they were impaired at recognizing happy expressions at both levels of subtlety. Furthermore how well PD participants identified happy expressions conveyed by either face or voice was strongly related to accuracy in the other modality. This suggests dysfunction of overlapping components of the circuitry processing happy expressions in PD. This study demonstrates the usefulness of including subtle expressions of emotion, likely to be encountered in everyday life, when assessing recognition of facial expressions.

  15. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    PubMed

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V; Hänninen, Laura; Krause, Christina M; Vainio, Outi

    2016-01-01

    Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on

  17. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions

    PubMed Central

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expressions recognition task. Recognition bias was measured as participants’ tendency to over-attribute the anger label to other negative facial expressions. Participants’ heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants’ performance was controlled for age, cognitive and educational levels and for naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants’ tendency to use the anger label was evident. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children’s “pre-existing bias” for anger labeling in forced-choice emotion recognition tasks. Moreover, they strengthen the thesis according to which the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim’s perceptive and attentive focus on salient environmental social stimuli. PMID:26509890

  18. Recognizing dynamic facial expressions of emotion: Specificity and intensity effects in event-related brain potentials.

    PubMed

    Recio, Guillermo; Schacht, Annekathrin; Sommer, Werner

    2014-02-01

    Emotional facial expressions usually arise dynamically from a neutral expression. Yet, most previous research focused on static images. The present study investigated basic aspects of processing dynamic facial expressions. In two experiments, we presented short videos of facial expressions of six basic emotions and non-emotional facial movements emerging at variable and fixed rise times, attaining different intensity levels. In event-related brain potentials (ERP), effects of emotion but also of non-emotional movements appeared as early posterior negativity (EPN) between 200 and 350 ms, suggesting an overall facilitation of early visual encoding for all facial movements. These EPN effects were emotion-unspecific. In contrast, relative to happiness and neutral expressions, negative emotional expressions elicited larger late positive ERP components (LPCs), indicating a more elaborate processing. Both EPN and LPC amplitudes increased with expression intensity. Effects of emotion and intensity were additive, indicating that intensity (understood as the degree of motion) increases the impact of emotional expressions but not their quality. These processes can be driven by all basic emotions, and there is little emotion-specificity even when statistical power is considerable (N = 102 in Experiment 2). PMID:24361701

  19. Interpreting text messages with graphic facial expression by deaf and hearing people

    PubMed Central

    Saegusa, Chihiro; Namatame, Miki; Watanabe, Katsumi

    2015-01-01

    In interpreting verbal messages, humans use not only verbal information but also non-verbal signals such as facial expression. For example, when a person says “yes” with a troubled face, what he or she really means appears ambiguous. In the present study, we examined how deaf and hearing people differ in perceiving real meanings in texts accompanied by representations of facial expression. Deaf and hearing participants were asked to imagine that the face presented on the computer monitor was asked a question from another person (e.g., do you like her?). They observed either a realistic or a schematic face with a different magnitude of positive or negative expression on a computer monitor. A balloon that contained either a positive or negative text response to the question appeared at the same time as the face. Then, participants rated how much the individual on the monitor really meant it (i.e., perceived earnestness), using a 7-point scale. Results showed that the facial expression significantly modulated the perceived earnestness. The influence of positive expression on negative text responses was relatively weaker than that of negative expression on positive responses (i.e., “no” tended to mean “no” irrespective of facial expression) for both participant groups. However, this asymmetrical effect was stronger in the hearing group. These results suggest that the contribution of facial expression in perceiving real meanings from text messages is qualitatively similar but quantitatively different between deaf and hearing people. PMID:25883582

  2. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, although the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than on affective information and mechanisms. PMID:26212348

  3. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    PubMed

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc. PMID:26915331
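    The within-network connectivity (WNC) measure used above is, in essence, each voxel's average resting-state correlation with every other voxel in the face network. A hedged sketch of that computation on fabricated time series (pure-Python Pearson correlation; real analyses run over thousands of voxels with preprocessing steps omitted here):

    ```python
    # Sketch of the within-network connectivity (WNC) idea: for each voxel,
    # average its Pearson correlation with every other voxel in the network.
    # The three "voxel time series" below are toy data, not fMRI output.
    import math

    def pearson(a, b):
        """Pearson correlation coefficient of two equal-length series."""
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = math.sqrt(sum((x - ma) ** 2 for x in a))
        sb = math.sqrt(sum((y - mb) ** 2 for y in b))
        return cov / (sa * sb)

    def wnc(series):
        """Mean correlation of each voxel's time series with all others."""
        out = []
        for i, s in enumerate(series):
            rs = [pearson(s, t) for j, t in enumerate(series) if j != i]
            out.append(sum(rs) / len(rs))
        return out

    voxels = [[1, 2, 3, 4, 5],
              [2, 4, 6, 8, 10],   # perfectly correlated with voxel 0
              [5, 3, 4, 1, 2]]    # roughly anti-correlated with voxel 0
    scores = wnc(voxels)
    print(scores)  # voxels 0 and 1 score higher than voxel 2
    ```

    Relating such per-voxel scores to behavioral performance across participants is then an ordinary correlation analysis, which is how the study links rpSTS connectivity to expression-recognition ability.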

  5. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    PubMed Central

    Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643

  7. Electromyographic Responses to Emotional Facial Expressions in 6-7 Year Olds with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Deschamps, P. K. H.; Coppes, L.; Kenemans, J. L.; Schutter, D. J. L. G.; Matthys, W.

    2015-01-01

    This study aimed to examine facial mimicry in 6-7 year old children with autism spectrum disorder (ASD) and to explore whether facial mimicry was related to the severity of impairment in social responsiveness. Facial electromyographic activity in response to angry, fearful, sad and happy facial expressions was recorded in twenty 6-7 year old…

  8. Impairment of emotional facial expression and prosody discrimination due to ischemic cerebellar lesions.

    PubMed

    Adamaszek, M; D'Agata, F; Kirkby, K C; Trenner, M U; Sehm, B; Steele, C J; Berneiser, J; Strecker, K

    2014-06-01

    A growing literature points to a specific role of the cerebellum in affect processing. However, understanding of affect processing disturbances following discrete cerebellar lesions is limited. We administered the Tübingen Affect Battery to assess recognition of emotional facial expression and emotional prosody in 15 patients with a cerebellar infarction and 10 age-matched controls. On emotional facial expression tasks, patients compared to controls showed impaired selection and matching of facial affect. On prosody tasks, patients showed marked impairments in naming affect and discriminating incongruencies. These deficits were more pronounced for negative affects. Our results confirm a significant role of the cerebellum in processing emotional recognition, a component of social cognition.

  9. Effects of exposure to facial expression variation in face learning and recognition.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  10. Discriminating facial expressions of emotion and its link with perceiving visual form in Parkinson's disease.

    PubMed

    Marneweck, Michelle; Hammond, Geoff

    2014-11-15

    We investigated the link between the ability to perceive facial expressions of emotion and the ability to perceive visual form in Parkinson's disease (PD). We assessed in individuals with PD and healthy controls the ability to discriminate graded intensities of facial expressions of anger from neutral expressions and the ability to discriminate radial frequency (RF) patterns with modulations in amplitude from a perfect circle. Those with PD were, as a group, impaired relative to controls in discriminating graded intensities of angry from neutral expressions and discriminating modulated amplitudes of RF patterns from perfect circles; these two abilities correlated positively and moderately to highly, even after removing the variance that was shared with disease progression and general cognitive functioning. The results indicate that the impaired ability to perceive visual form is likely to contribute to the impaired ability to perceive facial expressions of emotion in PD, and that both are related to the progression of the disease. PMID:25179875

  11. Synthesis of Facial Image with Expression Based on Muscular Contraction Parameters Using Linear Muscle and Sphincter Muscle

    NASA Astrophysics Data System (ADS)

    Ahn, Seonju; Ozawa, Shinji

    We aim to synthesize individual facial images with expressions based on muscular contraction parameters. We previously proposed a method for calculating the muscular contraction parameters from an arbitrary face image without per-individual learning, which allowed us to generate not only an individual's facial expression but also the expressions of various persons. In this paper, we propose a muscle-based facial model in which the facial muscles are defined as both linear muscles and a novel sphincter muscle. We also propose a method for synthesizing an individual facial image with an expression from muscular contraction parameters. First, an individual facial model with an expression is generated by fitting to an arbitrary face image. Next, the muscular contraction parameters corresponding to the expression displacements of the input face image are calculated. Finally, the facial expression is synthesized by displacing the vertices of a neutral facial model according to the calculated muscular contraction parameters. Experimental results show that the novel sphincter muscle enables synthesis of facial images that correspond to actual face images with arbitrary mouth or eye expressions.
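    The final synthesis step described above, displacing the vertices of a neutral facial model by the calculated muscular contraction parameters, can be sketched as a linear blend of per-muscle displacement fields. This is an illustrative simplification (the sphincter muscle in the paper is a nonlinear model, and the array shapes here are assumptions, not the authors' implementation):

    ```python
    import numpy as np

    def synthesize(neutral_vertices, muscle_basis, contraction):
        """Displace neutral mesh vertices by weighted muscle displacement fields.

        neutral_vertices: (n_vertices, 3) neutral-expression mesh
        muscle_basis:     (n_muscles, n_vertices, 3) per-muscle displacement field
        contraction:      (n_muscles,) muscular contraction parameters
        """
        # Weighted sum of displacement fields, added to the neutral mesh
        return neutral_vertices + np.tensordot(contraction, muscle_basis, axes=1)

    # Toy example: 4 vertices, 2 muscles with constant displacement fields
    neutral = np.zeros((4, 3))
    basis = np.ones((2, 4, 3)) * np.array([1.0, 2.0])[:, None, None]
    out = synthesize(neutral, basis, np.array([0.5, 0.25]))
    print(out[0])  # -> [1. 1. 1.]
    ```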

  12. Exposure to the self-face facilitates identification of dynamic facial expressions: influences on individual differences.

    PubMed

    Li, Yuan Hang; Tottenham, Nim

    2013-04-01

    A growing literature suggests that the self-face is involved in processing the facial expressions of others. The authors experimentally activated self-face representations to assess its effects on the recognition of dynamically emerging facial expressions of others. They exposed participants to videos of either their own faces (self-face prime) or faces of others (nonself-face prime) prior to a facial expression judgment task. Their results show that experimentally activating self-face representations results in earlier recognition of dynamically emerging facial expression. As a group, participants in the self-face prime condition recognized expressions earlier (when less affective perceptual information was available) compared to participants in the nonself-face prime condition. There were individual differences in performance, such that poorer expression identification was associated with higher autism traits (in this neurocognitively healthy sample). However, when randomized into the self-face prime condition, participants with high autism traits performed as well as those with low autism traits. Taken together, these data suggest that the ability to recognize facial expressions in others is linked with the internal representations of our own faces.

  13. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than previous researches and other fusion methods.
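    The score-level fusion step described above, an SVM trained on the pair of shape-based and appearance-based matching scores, can be sketched as follows. The synthetic scores and their distributions are illustrative assumptions, not the authors' data:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(42)
    n = 200
    # Synthetic matching scores: pairs with the same expression tend to
    # score higher on both the shape-based and the appearance-based matcher.
    same = rng.normal(loc=[0.70, 0.80], scale=0.1, size=(n, 2))
    diff = rng.normal(loc=[0.40, 0.45], scale=0.1, size=(n, 2))
    X = np.vstack([same, diff])
    y = np.array([1] * n + [0] * n)   # 1 = same expression, 0 = different

    # The fusion SVM decides from the 2-D score vector alone
    fusion_svm = SVC(kernel="rbf").fit(X, y)
    print(fusion_svm.score(X, y))
    ```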

  14. An optimized ERP brain-computer interface based on facial expression changes

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be

  15. Features classification using support vector machine for a facial expression recognition system

    NASA Astrophysics Data System (ADS)

    Patil, Rajesh A.; Sahula, Vineet; Mandal, Atanendu S.

    2012-10-01

    A methodology for automatic facial expression recognition in image sequences is proposed, which makes use of the Candide wire frame model and an active appearance algorithm for tracking, and a support vector machine (SVM) for classification. A face is detected automatically from the given image sequence, and by adapting the Candide wire frame model properly on the first frame of the face image sequence, facial features in the subsequent frames are tracked using an active appearance algorithm. The algorithm adapts the Candide wire frame model to the face in each of the frames and then automatically tracks the grid in consecutive video frames over time. We require that the first frame of the image sequence corresponds to the neutral facial expression, while the last frame corresponds to the greatest intensity of the facial expression. The geometrical displacement of the Candide wire frame nodes, defined as the difference of the node coordinates between the first frame and the frame of greatest expression intensity, is used as input to the SVM, which classifies the facial expression into one of six classes: happiness, surprise, sadness, anger, disgust, and fear.
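    The classification stage described above can be sketched as follows, assuming the tracker has already produced node coordinates for the neutral (first) and apex (last) frames; the synthetic sequences below stand in for real Candide tracking output and are not the authors' data:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def node_displacements(neutral_xy, apex_xy):
        """Flattened displacement of each tracked wireframe node between
        the neutral (first) frame and the apex (last) frame."""
        # neutral_xy, apex_xy: (n_nodes, 2) arrays of node coordinates
        return (apex_xy - neutral_xy).ravel()

    rng = np.random.default_rng(0)
    n_nodes, n_seq = 100, 60
    labels = rng.integers(0, 6, n_seq)          # 6 expression classes
    # Synthetic stand-in for tracked sequences: class-dependent displacement
    neutral = rng.normal(size=(n_seq, n_nodes, 2))
    apex = (neutral + labels[:, None, None] * 0.1
            + rng.normal(scale=0.01, size=(n_seq, n_nodes, 2)))

    X = np.array([node_displacements(n, a) for n, a in zip(neutral, apex)])
    clf = SVC(kernel="rbf").fit(X, labels)
    print(clf.score(X, labels))
    ```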

  16. Discrimination and categorization of emotional facial expressions and faces in Parkinson's disease.

    PubMed

    Alonso-Recio, Laura; Martín, Pilar; Rubio, Sandra; Serrano, Juan M

    2014-09-01

    Our objective was to compare the ability to discriminate and categorize emotional facial expressions (EFEs) and facial identity characteristics (age and/or gender) in a group of 53 individuals with Parkinson's disease (PD) and another group of 53 healthy subjects. On the one hand, by means of discrimination and identification tasks, we compared two stages in the visual recognition process that could be selectively affected in individuals with PD. On the other hand, facial expression versus gender and age comparison permits us to contrast whether the emotional or non-emotional content influences the configural perception of faces. In Experiment I, we did not find differences between groups, either with facial expression or age, in discrimination tasks. Conversely, in Experiment II, we found differences between the groups, but only in the EFE identification task. Taken together, our results indicate that configural perception of faces does not seem to be globally impaired in PD. However, this ability is selectively altered when the categorization of emotional faces is required. A deeper assessment of the PD group indicated that decline in facial expression categorization is more evident in a subgroup of patients with higher global impairment (motor and cognitive). Taken together, these results suggest that the problems found in facial expression recognition may be associated with the progressive neuronal loss in frontostriatal and mesolimbic circuits, which characterizes PD. PMID:23992026

  17. Inversion effects reveal dissociations in facial expression of emotion, gender, and object processing

    PubMed Central

    Pallett, Pamela M.; Meng, Ming

    2015-01-01

    To distinguish between high-level visual processing mechanisms, the degree to which holistic processing is involved in facial identity, facial expression, and object perception is often examined through measuring inversion effects. However, participants may be biased by different experimental paradigms to use more or less holistic processing. Here we take a novel psychophysical approach to directly compare human face and object processing in the same experiment, with face processing broken into two categories: variant properties and invariant properties as they were tested using facial expressions of emotion and gender, respectively. Specifically, participants completed two different perceptual discrimination tasks. One involved making judgments of stimulus similarity and the other tested the ability to detect differences between stimuli. Each task was completed for both upright and inverted stimuli. Results show significant inversion effects for the detection of differences in facial expressions of emotion and gender, but not for objects. More interestingly, participants exhibited a selective inversion deficit when making similarity judgments between different facial expressions of emotion, but not for gender or objects. These results suggest a three-way dissociation between facial expression of emotion, gender, and object processing. PMID:26283983

  18. Express

    Integrated Risk Information System (IRIS)

    Express; CASRN 101200-48-0. Human health assessment information on a chemical substance is included in the IRIS database only after a comprehensive review of toxicity data, as outlined in the IRIS assessment development process. Sections I (Health Hazard Assessments for Noncarcinogenic Effect

  19. Perceptual, Categorical, and Affective Processing of Ambiguous Smiling Facial Expressions

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Fernandez-Martin, Andres; Nummenmaa, Lauri

    2012-01-01

    Why is a face with a smile but non-happy eyes likely to be interpreted as happy? We used blended expressions in which a smiling mouth was incongruent with the eyes (e.g., angry eyes), as well as genuine expressions with congruent eyes and mouth (e.g., both happy or angry). Tasks involved detection of a smiling mouth (perceptual), categorization of…

  20. The Role of Facial Expressions in Attention-Orienting in Adults and Infants

    ERIC Educational Resources Information Center

    Rigato, Silvia; Menon, Enrica; Di Gangi, Valentina; George, Nathalie; Farroni, Teresa

    2013-01-01

    Faces convey many signals (i.e., gaze or expressions) essential for interpersonal interaction. We have previously shown that facial expressions of emotion and gaze direction are processed and integrated in specific combinations early in life. These findings open a number of developmental questions and specifically in this paper we address whether…

  1. Strategies for Perceiving Facial Expressions in Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Walsh, Jennifer A.; Vida, Mark D.; Rutherford, M. D.

    2014-01-01

    Rutherford and McIntosh (J Autism Dev Disord 37:187-196, 2007) demonstrated that individuals with autism spectrum disorder (ASD) are more tolerant than controls of exaggerated schematic facial expressions, suggesting that they may use an alternative strategy when processing emotional expressions. The current study was designed to test this finding…

  2. Recognition of Facial Expressions of Emotion in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Virji-Babul, Naznin; Watt, Kimberley; Nathoo, Farouk; Johnson, Peter

    2012-01-01

    Research on facial expressions in individuals with Down syndrome (DS) has been conducted using photographs. Our goal was to examine the effect of motion on perception of emotional expressions. Adults with DS, adults with typical development matched for chronological age (CA), and children with typical development matched for developmental age (DA)…

  3. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. Robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces is contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing.
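    Pooling correlation-based effect sizes with a random-effects model, as described above, is commonly done by Fisher-transforming each r and applying DerSimonian-Laird weighting. A minimal sketch with made-up study values (not the meta-analysis's actual data):

    ```python
    import numpy as np

    def random_effects_pool(r, n):
        """Pool correlations r from studies of size n (DerSimonian-Laird)."""
        r, n = np.asarray(r, float), np.asarray(n, float)
        z = np.arctanh(r)              # Fisher's z transform
        v = 1.0 / (n - 3.0)            # within-study variance of z
        w = 1.0 / v
        z_fixed = np.sum(w * z) / np.sum(w)
        q = np.sum(w * (z - z_fixed) ** 2)           # heterogeneity statistic Q
        df = len(z) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)                # between-study variance
        w_star = 1.0 / (v + tau2)                    # random-effects weights
        z_pooled = np.sum(w_star * z) / np.sum(w_star)
        return np.tanh(z_pooled)                     # back-transform to r

    # Illustrative (made-up) study correlations and sample sizes
    print(round(random_effects_pool([0.30, 0.45, 0.25], [100, 80, 120]), 3))
    ```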

  4. Young Infants Match Facial and Vocal Emotional Expressions of Other Infants

    PubMed Central

    Vaillant-Molina, Mariana; Bahrick, Lorraine E.; Flom, Ross

    2013-01-01

    Research has demonstrated that infants recognize emotional expressions of adults in the first half-year of life. We extended this research to a new domain, infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5- and 5-month-old infants heard a series of infant vocal expressions (positive and negative affect) along with side-by-side dynamic videos in which one infant conveyed positive facial affect and another infant conveyed negative facial affect. Results demonstrated that 5-month-olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5-month-olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face-voice synchrony, temporal or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice. PMID:24302853

  5. Understanding Discrete Facial Expressions in Video Using an Emotion Avatar Image.

    PubMed

    Yang, Songfan; Bhanu, B

    2012-08-01

    Existing video-based facial expression recognition techniques analyze the geometry-based and appearance-based information in every frame as well as explore the temporal relation among frames. In contrast, we present a new image-based representation and an associated reference image, called the emotion avatar image (EAI) and the avatar reference, respectively. This representation leverages out-of-plane head rotation. It is not only robust to outliers but also provides a method to aggregate dynamic information from expressions of various lengths. The approach to facial expression analysis consists of the following steps: 1) face detection; 2) face registration of video frames with the avatar reference to form the EAI representation; 3) computation of features from EAIs using both local binary patterns and local phase quantization; and 4) classification of the features as one of the emotion types using a linear support vector machine classifier. Our system is tested on the Facial Expression Recognition and Analysis Challenge (FERA2011) data, i.e., the Geneva Multimodal Emotion Portrayal-Facial Expression Recognition and Analysis Challenge (GEMEP-FERA) data set. The experimental results demonstrate that the information captured in an EAI for a facial expression is a very strong cue for emotion inference. Moreover, our method suppresses person-specific information for emotion and performs well on unseen data.
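    Step 3 of the pipeline, local binary pattern features followed by a linear SVM, can be sketched as follows. The hand-rolled 8-neighbour LBP and the synthetic "EAI" images are illustrative assumptions, not the authors' implementation (which also uses local phase quantization):

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    def lbp_histogram(img, bins=256):
        """Normalized histogram of basic 8-neighbour local binary pattern codes."""
        c = img[1:-1, 1:-1]
        neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                      img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                      img[2:, :-2], img[1:-1, :-2]]
        codes = np.zeros(c.shape, dtype=np.uint8)
        for bit, n in enumerate(neighbours):
            # Set one bit per neighbour that is >= the centre pixel
            codes |= ((n >= c).astype(np.uint8) << bit)
        hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
        return hist / hist.sum()

    rng = np.random.default_rng(1)

    def make_img(label):
        """Synthetic stand-in for an EAI: class 1 carries a gradient texture."""
        img = rng.normal(size=(32, 32))
        if label:
            img = img + np.linspace(0.0, 10.0, 32)   # horizontal gradient
        return img

    X = [lbp_histogram(make_img(lbl)) for lbl in [0, 1] * 20]
    y = [0, 1] * 20
    clf = LinearSVC(max_iter=5000).fit(X, y)
    print(clf.score(X, y))
    ```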

  6. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition

    PubMed Central

    de Gelder, Beatrice; Huis in ‘t Veld, Elisabeth M. J.; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery FEAST (Facial Expressive Action Stimulus Test) developed to test recognition of identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data of a healthy sample of controls in two age groups for future users of the FEAST. PMID:26579004

  7. Emotional facial expressions in European-American, Japanese, and Chinese infants.

    PubMed

    Camras, Linda A; Oster, Harriet; Campos, Joseph J; Bakeman, Roger

    2003-12-01

    Charles Darwin was among the first to recognize the important contribution that infant studies could make to our understanding of human emotional expression. Noting that infants come to exhibit many emotions, he also observed that at first their repertoire of expression is highly restricted. Today, considerable controversy exists regarding the question of whether infants experience and express discrete emotions. According to one position, discrete emotions emerge during infancy along with their prototypic facial expressions. These expressions closely resemble adult emotional expressions and are invariantly concordant with their corresponding emotions. In contrast, we propose that the relation between expression and emotion during infancy is more complex. Some infant emotions and emotional expressions may not be invariantly concordant. Furthermore, infant emotional expressions may be less differentiated than previously proposed. Together with past developmental studies, recent cross-cultural research supports this view and suggests that negative emotional expression in particular is only partly differentiated towards the end of the first year.

  8. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    PubMed

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than a synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial

  9. Facial expressions, their communicatory functions and neuro-cognitive substrates.

    PubMed Central

    Blair, R J R

    2003-01-01

    Human emotional expressions serve a crucial communicatory role allowing the rapid transmission of valence information from one individual to another. This paper will review the literature on the neural mechanisms necessary for this communication: both the mechanisms involved in the production of emotional expressions and those involved in the interpretation of the emotional expressions of others. Finally, reference to the neuro-psychiatric disorders of autism, psychopathy and acquired sociopathy will be made. In these conditions, the appropriate processing of emotional expressions is impaired. In autism, it is argued that the basic response to emotional expressions remains intact but that there is impaired ability to represent the referent of the individual displaying the emotion. In psychopathy, the response to fearful and sad expressions is attenuated and this interferes with socialization resulting in an individual who fails to learn to avoid actions that result in harm to others. In acquired sociopathy, the response to angry expressions in particular is attenuated resulting in reduced regulation of social behaviour. PMID:12689381

  10. The Development of Dynamic Facial Expression Recognition at Different Intensities in 4- to 18-Year-Olds

    ERIC Educational Resources Information Center

    Montirosso, Rosario; Peverelli, Milena; Frigerio, Elisa; Crespi, Monica; Borgatti, Renato

    2010-01-01

    The primary purpose of this study was to examine the effect of the intensity of emotion expression on children's developing ability to label emotion during a dynamic presentation of five facial expressions (anger, disgust, fear, happiness, and sadness). A computerized task (AFFECT--animated full facial expression comprehension test) was used to…

  11. That "poker face" just might lose you the game! The impact of expressive suppression and mimicry on sensitivity to facial expressions of emotion.

    PubMed

    Schneider, Kristin G; Hempel, Roelie J; Lynch, Thomas R

    2013-10-01

    Successful interpersonal functioning often requires both the ability to mask inner feelings and the ability to accurately recognize others' expressions; but what if effortful control of emotional expressions impacts the ability to accurately read others? In this study, we examined the influence of self-controlled expressive suppression and mimicry on facial affect sensitivity, the speed with which one can accurately identify gradually intensifying facial expressions of emotion. Muscle activity of the brow (corrugator, related to anger), upper lip (levator, related to disgust), and cheek (zygomaticus, related to happiness) was recorded using facial electromyography while participants randomized to one of three conditions (Suppress, Mimic, and No-Instruction) viewed a series of six distinct emotional expressions (happiness, sadness, fear, anger, surprise, and disgust) as they morphed from neutral to full expression. As hypothesized, individuals instructed to suppress their own facial expressions showed impaired facial affect sensitivity. Conversely, mimicry of emotion expressions appeared to facilitate facial affect sensitivity. Results suggest that it is difficult to simultaneously mask inner feelings and accurately "read" the facial expressions of others, at least when these expressions are at low intensity. The combined behavioral and physiological data suggest that the strategies an individual selects to control his or her own expression of emotion have important implications for interpersonal functioning.

  12. Impaired emotional facial expression recognition in alcoholics: are these deficits specific to emotional cues?

    PubMed

    Foisy, Marie-Line; Kornreich, Charles; Petiau, Cédric; Parez, Agathe; Hanak, Catherine; Verbanck, Paul; Pelc, Isidore; Philippot, Pierre

    2007-02-28

    Previous studies have repeatedly linked alcoholism to impairment in emotional facial expression decoding. The present study aimed to extend previous findings while controlling for stimulus exposure times, and added a control task on the decoding of non-emotional facial features. Twenty-five alcoholic participants were compared to 26 control participants matched for age, sex and educational level. Participants performed two computer tasks in which photographs of faces were presented for either 250 or 1000 ms. The first task required "yes" or "no" responses, as rapidly as possible, to questions regarding non-emotional features of the face (gender, age range and cultural identity). The second task used a different set of photographs and required emotional facial expression decoding, with the same exposure times: participants gave rapid "yes" or "no" responses to trials combining 32 emotional facial expressions with eight emotional labels (happiness, sadness, fear, anger, disgust, surprise, shame, and contempt). Reaction times were recorded for both tasks. Alcoholic and control participants showed similar response accuracy in both tasks. However, in the emotional facial expression task, alcoholic participants' responses matched more negative emotional labels, especially sadness. Further, alcoholics were slower than control participants specifically when answering emotional questions about emotional facial expressions; no group differences appeared in reaction times on the control task. Contrary to expectations, no interaction of stimulus exposure time and group was observed. Overall, these findings replicate and extend previous results on emotional facial expression decoding: alcoholics are specifically impaired in processing emotional non-verbal information, being slower to correctly identify an emotion. PMID:17267048

  13. Common cues to emotion in the dynamic facial expressions of speech and song

    PubMed Central

    Livingstone, Steven R.; Thompson, William F.; Wanderley, Marcelo M.; Palmer, Caroline

    2015-01-01

    Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotion judgements for voice-only singing were poorly identified, yet were accurate for all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, yet were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet highlight differences in perception and acoustic-motor production. PMID:25424388

  14. Perception of stereoscopic direct gaze: The effects of interaxial distance and emotional facial expressions.

    PubMed

    Hakala, Jussi; Kätsyri, Jari; Takala, Tapio; Häkkinen, Jukka

    2016-07-01

    Gaze perception has received considerable research attention due to its importance in social interaction. The majority of recent studies have utilized monoscopic pictorial gaze stimuli. However, a monoscopic direct gaze differs from a live or stereoscopic gaze. In the monoscopic condition, both eyes of the observer receive a direct gaze, whereas in live and stereoscopic conditions, only one eye receives a direct gaze. In the present study, we examined the implications of the difference between monoscopic and stereoscopic direct gaze. Moreover, because research has shown that stereoscopy affects the emotions elicited by facial expressions, and facial expressions affect the range of directions where an observer perceives mutual gaze (the cone of gaze), we studied the interaction effect of stereoscopy and facial expressions on gaze perception. Forty observers viewed stereoscopic images wherein one eye of the observer received a direct gaze while the other eye received a horizontally averted gaze at five different angles corresponding to five interaxial distances between the cameras in stimulus acquisition. In addition to monoscopic and stereoscopic conditions, the stimuli included neutral, angry, and happy facial expressions. The observers judged the gaze direction and mutual gaze of four lookers. Our results show that the mean of the directions received by the left and right eyes approximated the perceived gaze direction in the stereoscopic semidirect gaze condition. The probability of perceiving mutual gaze in the stereoscopic condition was substantially lower compared with monoscopic direct gaze. Furthermore, stereoscopic semidirect gaze significantly widened the cone of gaze for happy facial expressions.

  16. Facial EMG responses to emotional expressions are related to emotion perception ability.

    PubMed

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding the emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and their relationship to facial muscle responses, recorded with electromyography (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of m. corrugator supercilii activity in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

  17. Electromyographic responses to emotional facial expressions in 6-7 year olds with autism spectrum disorders.

    PubMed

    Deschamps, P K H; Coppes, L; Kenemans, J L; Schutter, D J L G; Matthys, W

    2015-02-01

    This study aimed to examine facial mimicry in 6-7 year old children with autism spectrum disorder (ASD) and to explore whether facial mimicry was related to the severity of impairment in social responsiveness. Facial electromyographic activity in response to angry, fearful, sad and happy facial expressions was recorded in twenty 6-7 year old children with ASD and twenty-seven typically developing children. Although results did not show differences in facial mimicry between children with ASD and typically developing children, impairment in social responsiveness was significantly associated with reduced fear mimicry in children with ASD. These findings demonstrate normal mimicry in children with ASD compared to healthy controls, but suggest that, within the ASD group, the degree of impairment in social responsiveness may be associated with reduced sensitivity to distress signals.

  18. Facial expression recognition based on image Euclidean distance-supervised neighborhood preserving embedding

    NASA Astrophysics Data System (ADS)

    Chen, Li; Li, Yingjie; Li, Haibin

    2014-11-01

    High-dimensional data often lie on a relatively low-dimensional manifold, and the nonlinear geometry of that manifold is embedded in the similarities between data points. Neighborhood Preserving Embedding (NPE) captures these similarity structures effectively, but as an unsupervised method it cannot use class information to guide nonlinear dimensionality reduction, and it ignores both the geometric structure of local data points and the spatial relationships among pixels, which can lead to classification failures. To address this, a feature extraction method based on Image Euclidean Distance-Supervised NPE (IED-SNPE) is proposed and applied to facial expression recognition. First, Image Euclidean Distance (IED) is employed to characterize the dissimilarity of data points. The neighborhood graph of the input data is then constructed according to this dissimilarity. Finally, the method fuses the prior nonlinear manifold of facial expression images with class-label information to extract discriminative features for expression recognition. In classification experiments on the JAFFE facial expression database, IED-SNPE was used for feature extraction and compared with NPE, SNPE, and IED-NPE. The results reveal that IED-SNPE not only preserves the local structure of the expression manifold well but also explicitly considers the spatial relationships among pixels in the images. It thus outperforms NPE in feature extraction and is highly competitive with well-known feature extraction methods.
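
    The core idea of IED can be sketched in a few lines: rather than comparing images pixel by pixel, the squared difference is weighted by a spatial kernel so that differences between nearby pixels partially cancel, making small spatial shifts count as small distances. A minimal sketch, assuming a Gaussian spatial kernel and an illustrative `sigma` parameter (the abstract does not specify the implementation):

```python
import numpy as np

def ied_distance(img_a, img_b, sigma=1.0):
    """Image Euclidean Distance: a Euclidean distance weighted by the
    spatial proximity of pixel pairs, so that a small spatial shift of
    a feature yields a small distance."""
    h, w = img_a.shape
    # Coordinates of every pixel, flattened to shape (h*w, 2).
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    # Spatial weighting matrix G: nearby pixel pairs get larger weights.
    sq_dists = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    G = np.exp(-sq_dists / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    diff = (img_a - img_b).ravel()
    return float(np.sqrt(diff @ G @ diff))
```

    Under this metric, two images that differ by a one-pixel shift of a bright spot are closer than two images whose bright spots sit far apart, which is the property the neighborhood-graph construction above relies on.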

  19. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    PubMed

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions.

  20. Coordination of gaze, facial expressions and vocalizations of early infant communication with mother and father.

    PubMed

    Colonnesi, Cristina; Zijlstra, Bonne J H; van der Zande, Annesophie; Bögels, Susan M

    2012-06-01

    Gaze direction, expressive behaviors and vocalizations are infants' first form of emotional communication. The present study examined the emotional configurations of these three behaviors during face-to-face situations and the effect of infants' and parents' gender. We observed 34 boys and 32 girls (mean age of 18 weeks) during normal face-to-face interaction with their mother and with their father. Three main behaviors and their temporal co-occurrence were observed: gaze direction at the partner as an indication of infants' attention, positive and negative facial expressions as emotional communication, and vocalizations as first forms of utterances. Pairwise, infants' production of vocalizations, positive facial expressions and gaze were strongly coordinated with each other. In addition, the majority of vocalizations produced during positive facial expressions coincided with gaze at the parent. Results on the effect of gender showed that infants (both boys and girls) produced coordinated patterns of positive facial expressions and gaze more often during the interaction with the mother as compared to the interaction with the father. Results contribute to the research on infants' early expression of emotions and gender differences.

  1. Reduced capacity in automatic processing of facial expression in restrictive anorexia nervosa and obesity.

    PubMed

    Cserjési, Renáta; Vermeulen, Nicolas; Lénárd, László; Luminet, Olivier

    2011-07-30

    There is growing evidence that disordered eating is associated with facial expression recognition and emotion processing problems. In this study, we investigated whether anorexia and obesity occur on a continuum of attention bias towards negative facial expressions in comparison with healthy individuals of normal weight. Thirty-three patients with restrictive anorexia nervosa (AN-R), 30 patients with obesity (OB) and 63 healthy controls matched for age and socioeconomic status were recruited. Our results indicated that AN-R patients were more attentive to angry faces and had difficulties in being attentive to positive expressions, whilst OB patients had problems in looking for or being attentive to negative expressions, independently of self-reported depression and anxiety. Our findings did not support the idea that AN-R and OB occur on a continuum. We found that AN-R was associated with a reduced capacity in positive facial expression processing, whereas OB was associated with a reduced capacity in negative facial expression processing. The social relevance of our findings and a possible explanation based upon neuroscience are discussed.

  2. A Face Attention Technique for a Robot Able to Interpret Facial Expressions

    NASA Astrophysics Data System (ADS)

    Simplício, Carlos; Prado, José; Dias, Jorge

    Automatic recognition of facial expressions from vision is an important step towards human-robot interaction. We propose a human-face focus-of-attention technique and a facial expression classifier (a Dynamic Bayesian Network) to be incorporated in an autonomous mobile agent whose hardware comprises a robotic platform and a robotic head. The focus-of-attention technique exploits the symmetry of human faces. Using the output of this module, the autonomous agent keeps the human face targeted frontally; to accomplish this, the robot platform performs an arc centered at the human while the robotic head, when necessary, moves in synchrony. In the proposed probabilistic classifier, information is propagated from the previous time instant, through a lower level of the network, to the current instant. Moreover, facial expressions are recognized using not only positive evidence but also negative evidence.
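
    The temporal propagation described above can be illustrated with a generic recursive Bayesian (HMM-style) filter. The paper's actual network structure, state set, and evidence model are not given in the abstract, so the states, transition matrix, and likelihoods below are all illustrative assumptions:

```python
import numpy as np

STATES = ["neutral", "happy", "angry"]       # hypothetical state set

# P(state_t | state_{t-1}): expressions tend to persist across frames.
TRANSITION = np.array([[0.8, 0.1, 0.1],
                       [0.1, 0.8, 0.1],
                       [0.1, 0.1, 0.8]])

def filter_step(belief, likelihood):
    """One time step: propagate the previous belief through the
    transition model, then reweight by the current frame's evidence.
    Low likelihood values act as negative evidence against a state."""
    predicted = TRANSITION.T @ belief
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.full(3, 1 / 3)                   # uniform prior
# Two frames of evidence: raised lip corners support "happy", while the
# absence of a lowered brow counts against "angry" (negative evidence).
for likelihood in ([0.2, 0.7, 0.1], [0.2, 0.7, 0.1]):
    belief = filter_step(belief, np.array(likelihood))
print(STATES[int(belief.argmax())])          # prints "happy"
```

    The belief at each instant thus combines the prediction carried over from the previous instant with both supporting and contradicting evidence, mirroring the classifier's use of positive and negative evidence.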

  3. A selective emotional decision-making bias elicited by facial expressions.

    PubMed

    Furl, Nicholas; Gallagher, Shannon; Averbeck, Bruno B

    2012-01-01

    Emotional and social information can sway otherwise rational decisions. For example, when participants decide between two faces that are probabilistically rewarded, they make biased choices that favor smiling relative to angry faces. This bias may arise because facial expressions evoke positive and negative emotional responses, which in turn may motivate social approach and avoidance. We tested a wide range of pictures that evoke emotions or convey social information, including animals, words, foods, a variety of scenes, and faces differing in trustworthiness or attractiveness, but we found only facial expressions biased decisions. Our results extend brain imaging and pharmacological findings, which suggest that a brain mechanism supporting social interaction may be involved. Facial expressions appear to exert special influence over this social interaction mechanism, one capable of biasing otherwise rational choices. These results illustrate that only specific types of emotional experiences can best sway our choices. PMID:22438936

  4. No effect of inversion on attentional and affective processing of facial expressions.

    PubMed

    Lipp, Ottmar V; Price, Sarah M; Tellegen, Cassandra L

    2009-04-01

    The decrease in recognition performance after face inversion has been taken to suggest that faces are processed holistically. Three experiments, 1 with schematic and 2 with photographic faces, were conducted to assess whether face inversion also affected visual search for and implicit evaluation of facial expressions of emotion. The 3 visual search experiments yielded the same differences in detection speed between different facial expressions of emotion for upright and inverted faces. Threat superiority effects, faster detection of angry than of happy faces among neutral background faces, were evident in 2 experiments. Face inversion did not affect explicit or implicit evaluation of face stimuli as assessed with verbal ratings and affective priming. Happy faces were evaluated as more positive than angry, sad, or fearful/scheming ones regardless of orientation. Taken together these results seem to suggest that the processing of facial expressions of emotion is not impaired if holistic processing is disrupted.

  6. Hemodynamic response of children with attention-deficit and hyperactive disorder (ADHD) to emotional facial expressions.

    PubMed

    Ichikawa, Hiroko; Nakato, Emi; Kanazawa, So; Shimamura, Keiichi; Sakuta, Yuiko; Sakuta, Ryoichi; Yamaguchi, Masami K; Kakigi, Ryusuke

    2014-10-01

    Children with attention-deficit/hyperactivity disorder (ADHD) have difficulty recognizing facial expressions. They identify angry expressions less accurately than typically developing (TD) children do, yet little is known about the atypical neural basis of their facial expression recognition. Here, we used near-infrared spectroscopy (NIRS) to examine the distinctive cerebral hemodynamics of ADHD and TD children while they viewed happy and angry expressions. We measured the hemodynamic responses of 13 ADHD boys and 13 TD boys to happy and angry expressions at their bilateral temporal areas, which are sensitive to face processing. The ADHD children showed an increased concentration of oxy-Hb for happy faces but not for angry faces, while TD children showed increased oxy-Hb for both. Moreover, the individual peak latency of the hemodynamic response in the right temporal area showed significantly greater variance in the ADHD group than in the TD group. Such atypical brain activity in ADHD boys may relate to their preserved ability to recognize a happy expression and their difficulty recognizing an angry expression. These results provide a first demonstration that NIRS can be used to detect atypical hemodynamic responses to facial expressions in children with ADHD.

  7. Neural evidence for cultural differences in the valuation of positive facial expressions.

    PubMed

    Park, BoKyung; Tsai, Jeanne L; Chim, Louise; Blevins, Elizabeth; Knutson, Brian

    2016-02-01

    European Americans value excitement more and calm less than Chinese. Within cultures, European Americans value excited and calm states similarly, whereas Chinese value calm more than excited states. To examine how these cultural differences influence people's immediate responses to excited vs calm facial expressions, we combined a facial rating task with functional magnetic resonance imaging. During scanning, European American (n = 19) and Chinese (n = 19) females viewed and rated faces that varied by expression (excited, calm), ethnicity (White, Asian) and gender (male, female). As predicted, European Americans showed greater activity in circuits associated with affect and reward (bilateral ventral striatum, left caudate) while viewing excited vs calm expressions than did Chinese. Within cultures, European Americans responded to excited vs calm expressions similarly, whereas Chinese showed greater activity in these circuits in response to calm vs excited expressions regardless of targets' ethnicity or gender. Across cultural groups, greater ventral striatal activity while viewing excited vs. calm expressions predicted greater preference for excited vs calm expressions months later. These findings provide neural evidence that people find viewing the specific positive facial expressions valued by their cultures to be rewarding and relevant.

  8. Putting the face in context: Body expressions impact facial emotion processing in human infants.

    PubMed

    Rajhans, Purva; Jessen, Sarah; Missana, Manuela; Grossmann, Tobias

    2016-06-01

    Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception. PMID:26974742

  9. The processing of facial identity and expression is interactive, but dependent on task and experience.

    PubMed

    Yankouskaya, Alla; Humphreys, Glyn W; Rotshtein, Pia

    2014-01-01

    Facial identity and emotional expression are two important sources of information for daily social interaction. However the link between these two aspects of face processing has been the focus of an unresolved debate for the past three decades. Three views have been advocated: (1) separate and parallel processing of identity and emotional expression signals derived from faces; (2) asymmetric processing with the computation of emotion in faces depending on facial identity coding but not vice versa; and (3) integrated processing of facial identity and emotion. We present studies with healthy participants that primarily apply methods from mathematical psychology, formally testing the relations between the processing of facial identity and emotion. Specifically, we focused on the "Garner" paradigm, the composite face effect and divided attention tasks. We further ask whether the architecture of face-related processes is fixed or flexible and whether (and how) it can be shaped by experience. We conclude that formal methods of testing the relations between processes show that the processing of facial identity and expression interacts, and hence is not fully independent. We further demonstrate that the architecture of these relations depends on experience, with experience leading to a higher degree of interdependence in the processing of identity and expressions. We propose that this change occurs because integrative processing is more efficient than parallel processing. Finally, we argue that the dynamic aspects of face processing need to be incorporated into theories in this field.

  11. A new physical model with multilayer architecture for facial expression animation using dynamic adaptive mesh.

    PubMed

    Zhang, Yu; Prakash, Edmond C; Sung, Eric

    2004-01-01

    This paper presents a new physically-based 3D facial model based on anatomical knowledge which provides high fidelity for facial expression animation while optimizing the computation. Our facial model has a multilayer biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators, and underlying skull structure. In contrast to existing mass-spring-damper (MSD) facial models, our dynamic skin model uses nonlinear springs to directly simulate the nonlinear visco-elastic behavior of soft tissue, and a new kind of edge repulsion spring is developed to prevent collapse of the skin model. Different types of muscle models have been developed to simulate the distribution of the muscle force applied on the skin due to muscle contraction. The presence of the skull advantageously constrains the skin movements, resulting in more accurate facial deformation, and also guides the interactive placement of facial muscles. The governing dynamics are computed using a local semi-implicit ODE solver. In the dynamic simulation, an adaptive refinement scheme automatically adjusts the local mesh resolution wherever potential inaccuracies are detected, depending on local deformation. The method, in effect, ensures the required speedup by concentrating computational time only where needed, while ensuring realistic behavior within a predefined error threshold. This mechanism allows more pleasing animation results to be produced at a reduced computational cost.
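
    The nonlinear spring and semi-implicit integration the abstract describes can be sketched in miniature (an illustrative stand-in, not the authors' implementation; all function names and constants are assumptions):

    ```python
    # One mass on a cubic-stiffening spring with damping, advanced with a
    # semi-implicit (symplectic) Euler step -- the same solver family the
    # paper uses for its skin mesh, reduced to a single degree of freedom.

    def nonlinear_spring_force(x, k=10.0, k3=50.0, rest=1.0):
        """Cubic stiffening spring: stiffens as strain grows, a simple
        stand-in for the visco-elastic behaviour of soft tissue."""
        s = x - rest
        return -(k * s + k3 * s ** 3)

    def semi_implicit_step(x, v, dt, mass=1.0, damping=0.5):
        """Update velocity first, then position with the *new* velocity;
        more stable than explicit Euler for stiff springs."""
        f = nonlinear_spring_force(x) - damping * v
        v = v + dt * f / mass
        x = x + dt * v
        return x, v

    def simulate(x0=1.5, v0=0.0, dt=0.01, steps=2000):
        """Integrate until the damped node settles back toward rest."""
        x, v = x0, v0
        for _ in range(steps):
            x, v = semi_implicit_step(x, v, dt)
        return x, v
    ```

    In the paper's full model, many such nodes are coupled over a mesh and the adaptive refinement concentrates small time steps and fine resolution only where deformation is large.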

  12. [Oro-facial-digital syndrome type I: variable phenotypic expression].

    PubMed

    Boldrini, María Pía; Giovo, María Elsa; Bogado, Claudia

    2014-12-01

    Oral-facial-digital syndrome type 1 (OFD1; OMIM #311200) is a developmental disorder transmitted as an X-linked dominant condition with embryonic male lethality. It is associated with malformation of the oral cavity, face, and digits. Furthermore, it is characterized by the presence of milia, hypotrichosis and polycystic kidney disease. We present two cases with clinical diagnosis of oral-facial-digital syndrome type I with some phenotypic variability between them.

  13. The role of spatial frequency information in the recognition of facial expressions of pain.

    PubMed

    Wang, Shan; Eccleston, Christopher; Keogh, Edmund

    2015-09-01

    Being able to detect pain from facial expressions is critical for pain communication. Alongside identifying the specific facial codes used in pain recognition, there are other types of more basic perceptual features, such as spatial frequency (SF), which refers to the amount of detail in a visual display. Low SF carries coarse information, which can be seen from a distance, and high SF carries fine-detailed information that can only be perceived when viewed close up. As this type of basic information has not been considered in the recognition of pain, we therefore investigated the role of low-SF and high-SF information in the decoding of facial expressions of pain. Sixty-four pain-free adults completed 2 independent tasks: a multiple expression identification task of pain and core emotional expressions and a dual expression "either-or" task (pain vs fear, pain vs happiness). Although both low-SF and high-SF information make the recognition of pain expressions possible, low-SF information seemed to play a more prominent role. This general low-SF bias would seem an advantageous way of potential threat detection, as facial displays will be degraded if viewed from a distance or in peripheral vision. One exception was found, however, in the "pain-fear" task, where responses were not affected by SF type. Together, this not only indicates a flexible role for SF information that depends on task parameters (goal context) but also suggests that in challenging visual conditions, we perceive an overall affective quality of pain expressions rather than detailed facial features. PMID:26075962
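
    The low/high spatial-frequency split the abstract relies on can be illustrated on a toy 1-D signal (a sketch under the usual definition of SF filtering, not the study's stimulus pipeline; function names are assumptions):

    ```python
    # Low SF = coarse structure surviving a local average; high SF = the
    # fine-detail residual. Together they reconstruct the original signal.

    def low_pass(signal, radius=2):
        """Moving average: keeps coarse, low-spatial-frequency content."""
        n = len(signal)
        out = []
        for i in range(n):
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            out.append(sum(signal[lo:hi]) / (hi - lo))
        return out

    def high_pass(signal, radius=2):
        """Residual after low-pass: fine, high-spatial-frequency detail."""
        smooth = low_pass(signal, radius)
        return [s - m for s, m in zip(signal, smooth)]
    ```

    Degrading an image by distance or peripheral viewing acts like `low_pass`: the coarse component survives, which is why a low-SF bias is useful for threat detection.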

  15. Beyond face value: does involuntary emotional anticipation shape the perception of dynamic facial expressions?

    PubMed

    Palumbo, Letizia; Jellema, Tjeerd

    2013-01-01

    Emotional facial expressions are immediate indicators of the affective dispositions of others. Recently it has been shown that early stages of social perception can already be influenced by (implicit) attributions made by the observer about the agent's mental state and intentions. In the current study possible mechanisms underpinning distortions in the perception of dynamic, ecologically-valid, facial expressions were explored. In four experiments we examined to what extent basic perceptual processes such as contrast/context effects, adaptation and representational momentum underpinned the perceptual distortions, and to what extent 'emotional anticipation', i.e. the involuntary anticipation of the other's emotional state of mind on the basis of the immediate perceptual history, might have played a role. Neutral facial expressions displayed at the end of short video-clips, in which an initial facial expression of joy or anger gradually morphed into a neutral expression, were misjudged as being slightly angry or slightly happy, respectively (Experiment 1). This response bias disappeared when the actor's identity changed in the final neutral expression (Experiment 2). Videos depicting neutral-to-joy-to-neutral and neutral-to-anger-to-neutral sequences again produced biases, but in the opposite direction (Experiment 3). The bias survived insertion of a 400 ms blank (Experiment 4). These results suggested that the perceptual distortions were not caused by any of the low-level perceptual mechanisms (adaptation, representational momentum and contrast effects). We speculate that especially when presented with dynamic facial expressions, perceptual distortions occur that reflect 'emotional anticipation' (a low-level mindreading mechanism), which overrules low-level visual mechanisms. Underpinning neural mechanisms are discussed in relation to the current debate on action and emotion understanding. PMID:23409112

  17. Time-Delay Neural Network for Continuous Emotional Dimension Prediction From Facial Expression Sequences.

    PubMed

    Meng, Hongying; Bianchi-Berthouze, Nadia; Deng, Yangdong; Cheng, Jinkuang; Cosmas, John P

    2016-04-01

    Automatic continuous affective state prediction from naturalistic facial expressions is a very challenging but important research topic in human-computer interaction. One of the main challenges is modeling the dynamics that characterize naturalistic expressions. In this paper, a novel two-stage automatic system is proposed to continuously predict affective dimension values from facial expression videos. In the first stage, traditional regression methods are used to classify each individual video frame, while in the second stage, a time-delay neural network (TDNN) is proposed to model the temporal relationships between consecutive predictions. The two-stage approach separates the emotional state dynamics modeling from an individual emotional state prediction step based on input features. In doing so, the temporal information used by the TDNN is not biased by the high variability between features of consecutive frames and allows the network to more easily exploit the slow changing dynamics between emotional states. The system was fully tested and evaluated on three different facial expression video datasets. Our experimental results demonstrate that the use of a two-stage approach combined with the TDNN to take into account previously classified frames significantly improves the overall performance of continuous emotional state estimation in naturalistic facial expressions. The proposed approach won the affect recognition sub-challenge of the Third International Audio/Visual Emotion Recognition Challenge. PMID:25910269
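
    The two-stage idea can be sketched minimally (illustrative only; a real TDNN learns its delay-window weights, whereas here a fixed uniform kernel stands in for them, and all names are assumptions):

    ```python
    # Stage 1 yields a noisy per-frame affect prediction; stage 2
    # re-estimates each value from a delay window of stage-1 outputs,
    # exploiting the slow dynamics of emotional states.

    def stage2_time_delay(frame_preds, delays=4):
        """Predict frame t from stage-1 outputs at frames t-delays .. t."""
        out = []
        for t in range(len(frame_preds)):
            window = frame_preds[max(0, t - delays): t + 1]
            out.append(sum(window) / len(window))  # fixed uniform weights
        return out
    ```

    Because the second stage only sees stage-1 predictions, it is insulated from the frame-to-frame variability of the raw features, which is the design point the abstract emphasises.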

  18. Multiple faces of pain: effects of chronic pain on the brain regulation of facial expression.

    PubMed

    Vachon-Presseau, Etienne; Roy, Mathieu; Woo, Choong-Wan; Kunz, Miriam; Martel, Marc-Olivier; Sullivan, Michael J; Jackson, Philip L; Wager, Tor D; Rainville, Pierre

    2016-08-01

    Pain behaviors are shaped by social demands and learning processes, and chronic pain has been previously suggested to affect their meaning. In this study, we combined functional magnetic resonance imaging with in-scanner video recording during thermal pain stimulation and used multilevel mediation analyses to study the brain mediators of pain facial expressions and the perception of pain intensity (self-reports) in healthy individuals and patients with chronic back pain (CBP). Behavioral data showed that the relation between pain expression and pain report was disrupted in CBP. In both patients with CBP and healthy controls, brain activity varying on a trial-by-trial basis with pain facial expressions was mainly located in the primary motor cortex and completely dissociated from the pattern of brain activity varying with pain intensity ratings. Stronger activity was observed in CBP specifically during pain facial expressions in several nonmotor brain regions such as the medial prefrontal cortex, the precuneus, and the medial temporal lobe. In sharp contrast, no moderating effect of chronic pain was observed on brain activity associated with pain intensity ratings. Our results demonstrate that pain facial expressions and pain intensity ratings reflect different aspects of pain processing and support psychosocial models of pain suggesting that distinctive mechanisms are involved in the regulation of pain behaviors in chronic pain. PMID:27411160
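
    The core of a mediation analysis can be shown in its simplest single-level form (the study uses a multilevel version; this sketch and its names are assumptions): path a regresses the mediator M (e.g. trial-wise brain activity) on the stimulus X, path b regresses the outcome Y (e.g. pain report) on M controlling for X, and the indirect effect is a*b.

    ```python
    # Closed-form OLS slopes from covariances; indirect effect = a * b.

    def _cov(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n

    def indirect_effect(x, m, y):
        """Single-mediator indirect effect a*b (X -> M -> Y)."""
        a = _cov(x, m) / _cov(x, x)  # path a: X -> M
        denom = _cov(m, m) * _cov(x, x) - _cov(m, x) ** 2
        # path b: slope of M in the regression of Y on (M, X)
        b = (_cov(y, m) * _cov(x, x) - _cov(y, x) * _cov(m, x)) / denom
        return a * b
    ```

    The multilevel variant in the paper estimates these paths per trial within each participant and then aggregates across participants.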

  19. Beyond pleasure and pain: Facial expression ambiguity in adults and children during intense situations.

    PubMed

    Wenzler, Sofia; Levine, Sarah; van Dick, Rolf; Oertel-Knöchel, Viola; Aviezer, Hillel

    2016-09-01

    According to psychological models as well as common intuition, intense positive and negative situations evoke highly distinct emotional expressions. Nevertheless, recent work has shown that when judging isolated faces, the affective valence of winning and losing professional tennis players is hard to differentiate. However, expressions produced by professional athletes during publicly broadcasted sports events may be strategically controlled. To shed light on this matter we examined if ordinary people's spontaneous facial expressions evoked during highly intense situations are diagnostic for the situational valence. In Experiment 1 we compared reactions with highly intense positive situations (surprise soldier reunions) versus highly intense negative situations (terror attacks). In Experiment 2, we turned to children and compared facial reactions with highly positive situations (e.g., a child receiving a surprise trip to Disneyland) versus highly negative situations (e.g., a child discovering her parents ate up all her Halloween candy). The results demonstrate that facial expressions of both adults and children are often not diagnostic for the valence of the situation. These findings demonstrate the ambiguity of extreme facial expressions and highlight the importance of context in everyday emotion perception. PMID:27337681

  1. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smiling and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with neutral emotion represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
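
    The matching step described above can be sketched as follows (an illustrative stand-in, not the authors' code; 1-D sequences stand in for the per-frame geometric feature components, and all names are assumptions):

    ```python
    # Dynamic time warping aligns two expression sequences of different
    # lengths; a 1-nearest-neighbour rule then labels a query expression
    # by its closest training sequence.

    def dtw_distance(seq_a, seq_b):
        """Classic DTW cost between two 1-D feature sequences."""
        inf = float("inf")
        n, m = len(seq_a), len(seq_b)
        d = [[inf] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(seq_a[i - 1] - seq_b[j - 1])
                d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
        return d[n][m]

    def knn_label(query, training):
        """training: list of (sequence, label); returns the 1-NN label."""
        return min(training, key=lambda item: dtw_distance(query, item[0]))[1]
    ```

    DTW's tolerance for different sequence lengths matters here because expressions unfold at different speeds across people and trials.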

  2. Face or body? Oxytocin improves perception of emotions from facial expressions in incongruent emotional body context.

    PubMed

    Perry, Anat; Aviezer, Hillel; Goldstein, Pavel; Palgi, Sharon; Klein, Ehud; Shamay-Tsoory, Simone G

    2013-11-01

    The neuropeptide oxytocin (OT) has been repeatedly reported to play an essential role in the regulation of social cognition in humans in general, and specifically in enhancing the recognition of emotions from facial expressions. The latter was assessed in different paradigms that rely primarily on isolated and decontextualized emotional faces. However, recent evidence has indicated that the perception of basic facial expressions is not context invariant and can be categorically altered by context, especially body context, at early perceptual levels. Body context has a strong effect on our perception of emotional expressions, especially when the actual target face and the contextually expected face are perceptually similar. To examine whether and how OT affects emotion recognition, we investigated the role of OT in categorizing facial expressions in incongruent body contexts. Our results show that in the combined process of deciphering emotions from facial expressions and from context, OT gives an advantage to the face. This advantage is most evident when the target face and the contextually expected face are perceptually similar. PMID:23962953

  4. Investigating the brain basis of facial expression perception using multi-voxel pattern analysis.

    PubMed

    Wegrzyn, Martin; Riehle, Marcel; Labudda, Kirsten; Woermann, Friedrich; Baumgartner, Florian; Pollmann, Stefan; Bien, Christian G; Kissler, Johanna

    2015-08-01

    Humans can readily decode emotion expressions from faces and perceive them in a categorical manner. The model by Haxby and colleagues proposes a number of different brain regions with each taking over specific roles in face processing. One key question is how these regions directly compare to one another in successfully discriminating between various emotional facial expressions. To address this issue, we compared the predictive accuracy of all key regions from the Haxby model using multi-voxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data. Regions of interest were extracted using independent meta-analytical data. Participants viewed four classes of facial expressions (happy, angry, fearful and neutral) in an event-related fMRI design, while performing an orthogonal gender recognition task. Activity in all regions allowed for robust above-chance predictions. When directly comparing the regions to one another, fusiform gyrus and superior temporal sulcus (STS) showed highest accuracies. These results underscore the role of the fusiform gyrus as a key region in perception of facial expressions, alongside STS. The study suggests the need for further specification of the relative role of the various brain areas involved in the perception of facial expression. Face processing appears to rely on more interactive and functionally overlapping neural mechanisms than previously conceptualised. PMID:26046623
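
    The MVPA logic of comparing regions on cross-validated decoding accuracy can be sketched as follows (illustrative only, not the study's pipeline; a nearest-centroid classifier stands in for the usual linear classifier, and all names are assumptions):

    ```python
    # Each "pattern" is a voxel-activity vector from one ROI for one
    # trial; leave-one-out cross-validation estimates how well the ROI's
    # patterns discriminate expression classes.

    def nearest_centroid_predict(train, test_pattern):
        """train: list of (pattern, label). Assigns the label whose mean
        pattern is closest (squared Euclidean) to the test pattern."""
        groups = {}
        for pattern, label in train:
            groups.setdefault(label, []).append(pattern)
        def dist(patterns, p):
            mean = [sum(col) / len(col) for col in zip(*patterns)]
            return sum((a - b) ** 2 for a, b in zip(mean, p))
        return min(groups, key=lambda lab: dist(groups[lab], test_pattern))

    def loo_accuracy(patterns):
        """Leave-one-out accuracy over a list of (pattern, label) pairs."""
        hits = 0
        for i, (pattern, label) in enumerate(patterns):
            train = patterns[:i] + patterns[i + 1:]
            hits += nearest_centroid_predict(train, pattern) == label
        return hits / len(patterns)
    ```

    Running `loo_accuracy` separately on patterns extracted from each ROI, then comparing the accuracies, mirrors the region-by-region comparison the study performs.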

  5. The role of the amygdala in facial emotional expression during a discrimination task.

    PubMed

    Carvajal, Fernando; Rubio, Sandra; Martín, Pilar; Amarante, Clara; García-Sola, Rafael

    2007-02-01

    A total of 50 patients with temporal lobe epilepsy and unilateral resection of the hippocampus and amygdala were studied: 27 with left lobectomy (LTL group) and 23 with right lobectomy (RTL group), together with 28 healthy control participants (HC group). The task consisted of identifying the dissimilar photograph from a group of photographs of the same face. The difference could correspond to the identity of the model or the facial expression (happiness, anger, sadness and fear). The results showed that when the difference in the photograph resided in the identity of the model, the RTL group made more mistakes than the HC group. When the facial expression was the distinguishing feature, mean response latency was longer in the LTL group than in the HC group. Comparison of the emotions revealed that the greatest differences were obtained with the fear expression, in all three participant groups. The dissociation of neural circuits responsible for processing facial expressions is discussed, especially the role of the left amygdala in discriminating between facial expressions. PMID:17295979

  6. Facial expressions of emotions: recognition accuracy and affective reactions during late childhood.

    PubMed

    Mancini, Giacomo; Agnoli, Sergio; Baldaro, Bruno; Bitti, Pio E Ricci; Surcinelli, Paola

    2013-01-01

    The present study examined the development of recognition ability and affective reactions to emotional facial expressions in a large sample of school-aged children (n = 504, ages 8-11 years). Specifically, the study aimed to investigate whether changes in emotion recognition ability and the affective reactions associated with the viewing of facial expressions occur during late childhood. Moreover, because small but robust gender differences during late childhood have been proposed, the effects of gender on the development of emotion recognition and affective responses were examined. The results showed an overall increase in emotional face recognition ability from 8 to 11 years of age, particularly for neutral and sad expressions. However, the increase in sadness recognition was primarily due to the development of this recognition in boys. Moreover, our results indicate different developmental trends in males and females regarding the recognition of disgust. Last, developmental changes in affective reactions to emotional facial expressions were found. Whereas recognition ability increased over the developmental time period studied, affective reactions elicited by facial expressions were characterized by a decrease in arousal over the course of late childhood.

  7. Neural responses to facial expressions support the role of the amygdala in processing threat.

    PubMed

    Mattavelli, Giulia; Sormaz, Mladen; Flack, Tessa; Asghar, Aziz U R; Fan, Siyan; Frey, Julia; Manssuer, Luis; Usten, Deniz; Young, Andrew W; Andrews, Timothy J

    2014-11-01

    The amygdala is known to play an important role in the response to facial expressions that convey fear. However, it remains unclear whether the amygdala's response to fear reflects its role in the interpretation of danger and threat, or whether it is to some extent activated by all facial expressions of emotion. Previous attempts to address this issue using neuroimaging have been confounded by differences in the use of control stimuli across studies. Here, we address this issue using a block design functional magnetic resonance imaging paradigm, in which we compared the response to face images posing expressions of fear, anger, happiness, disgust and sadness with a range of control conditions. The responses in the amygdala to different facial expressions were compared with the responses to a non-face condition (buildings), to mildly happy faces and to neutral faces. Results showed that only fear and anger elicited significantly greater responses compared with the control conditions involving faces. Overall, these findings are consistent with the role of the amygdala in processing threat, rather than in the processing of all facial expressions of emotion, and demonstrate the critical importance of the choice of comparison condition to the pattern of results.

  8. Human amygdala response to dynamic facial expressions of positive and negative surprise.

    PubMed

    Vrticka, Pascal; Lordier, Lara; Bediou, Benoît; Sander, David

    2014-02-01

    Although brain imaging evidence accumulates to suggest that the amygdala plays a key role in the processing of novel stimuli, little is known about its role in processing expressed novelty conveyed by surprised faces, and even less about possible interactive encoding of novelty and valence. Those investigations that have already probed human amygdala involvement in the processing of surprised facial expressions either used static pictures displaying negative surprise (as contained in fear) or "neutral" surprise, and manipulated valence by contextually priming or subjectively associating static surprise with either negative or positive information. Therefore, it still remains unresolved how the human amygdala differentially processes dynamic surprised facial expressions displaying either positive or negative surprise. Here, we created new artificial dynamic 3-dimensional facial expressions conveying surprise with an intrinsic positive (wonderment) or negative (fear) connotation, but also intrinsic positive (joy) or negative (anxiety) emotions not containing any surprise, in addition to neutral facial displays either containing ("typical surprise" expression) or not containing ("neutral") surprise. Results showed heightened amygdala activity to faces containing positive (vs. negative) surprise, which may correspond either to a specific wonderment effect or to the computation of a negative expected value prediction error. Findings are discussed in the light of data obtained from a closely matched nonsocial lottery task, which revealed overlapping activity within the left amygdala to unexpected positive outcomes. PMID:24219397

  9. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study.

    PubMed

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan; Shi, Jiannong

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the number of central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions, to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50-130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320-450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection of social-emotional information. PMID:26375031
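
    The vMMN measure used in the study can be sketched as follows (a minimal illustration of the standard deviant-minus-standard computation, not the authors' analysis code; names and the sampling step are assumptions):

    ```python
    # The visual mismatch negativity is the deviant-minus-standard
    # difference wave, summarised as its mean amplitude in a latency
    # window such as 50-130 ms.

    def difference_wave(deviant_erp, standard_erp):
        """Pointwise deviant - standard, both averaged ERPs in microvolts."""
        return [d - s for d, s in zip(deviant_erp, standard_erp)]

    def mean_amplitude(wave, t_start, t_end, sample_ms=2):
        """Mean amplitude in [t_start, t_end) ms given the sampling step."""
        i0, i1 = t_start // sample_ms, t_end // sample_ms
        window = wave[i0:i1]
        return sum(window) / len(window)
    ```

    A more negative `mean_amplitude` in the chosen window indicates a stronger mismatch response, which is the quantity compared between the IQ groups.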

  10. Speed, amplitude, and asymmetry of lip movement in voluntary puckering and blowing expressions: implications for facial assessment.

    PubMed

    Schmidt, Karen L; VanSwearingen, Jessie M; Levenstein, Rachel M

    2005-07-01

    The context of voluntary movement during facial assessment has significant effects on the activity of facial muscles. Using automated facial analysis, we found that healthy subjects instructed to blow produced lip movements that were longer in duration and larger in amplitude than when subjects were instructed to pucker. We also determined that lip movement for puckering expressions was more asymmetric than lip movement in blowing. Differences in characteristics of lip movement were noted using facial movement analysis and were associated with the context of the movement. The impact of the instructions given for voluntary movement on the characteristics of facial movement might have important implications for assessing the capabilities and deficits of movement control in individuals with facial movement disorders. If results generalize to the clinical context, assessment of generally focused voluntary facial expressions might inadequately demonstrate the full range of facial movement capability of an individual patient.
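
    The three movement characteristics compared above (amplitude, duration, asymmetry) can be operationalised roughly as follows (illustrative measures computed from a tracked lip landmark's displacement over time; names, thresholds and the frame rate are assumptions, not the study's definitions):

    ```python
    # Simple summary measures over a per-frame displacement trace.

    def movement_amplitude(displacement):
        """Peak displacement from rest (frame 0 taken as rest position)."""
        rest = displacement[0]
        return max(abs(d - rest) for d in displacement)

    def movement_duration(displacement, threshold=0.1, frame_ms=33):
        """Total time (ms) the lip is displaced beyond a motion threshold."""
        rest = displacement[0]
        return sum(frame_ms for d in displacement if abs(d - rest) > threshold)

    def asymmetry(left_amp, right_amp):
        """Normalised left/right amplitude difference (0 = symmetric)."""
        return abs(left_amp - right_amp) / max(left_amp, right_amp)
    ```

    Comparing these measures between pucker and blow instructions, as the study does, makes the effect of movement context directly quantifiable.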

  11. Muscles of facial expression in the chimpanzee (Pan troglodytes): descriptive, comparative and phylogenetic contexts

    PubMed Central

    Burrows, Anne M; Waller, Bridget M; Parr, Lisa A; Bonar, Christopher J

    2006-01-01

    Facial expressions are a critical mode of non-vocal communication for many mammals, particularly non-human primates. Although chimpanzees (Pan troglodytes) have an elaborate repertoire of facial signals, little is known about the facial expression (i.e. mimetic) musculature underlying these movements, especially when compared with some other catarrhines. Here we present a detailed description of the facial muscles of the chimpanzee, framed in comparative and phylogenetic contexts, through the dissection of preserved faces using a novel approach. The arrangement and appearance of muscles were noted and compared with previous studies of chimpanzees and with prosimians, cercopithecoids and humans. The results showed 23 mimetic muscles in P. troglodytes, including a thin sphincter colli muscle, reported previously only in adult prosimians, a bi-layered zygomaticus major muscle and a distinct risorius muscle. The presence of these muscles in such detail supports previous studies that describe an elaborate and highly graded facial communication system in this species that remains qualitatively different from that reported for other non-human primate species. In addition, there are minimal anatomical differences between chimpanzees and humans, contrary to conclusions from previous studies. These results underscore the importance of understanding facial musculature in primate taxa, which may hold great taxonomic value. PMID:16441560

  12. The primacy of negative interpretations when resolving the valence of ambiguous facial expressions.

    PubMed

    Neta, Maital; Whalen, Paul J

    2010-07-01

    Low-spatial-frequency (LSF) visual information is processed in an elemental fashion before a finer analysis of high-spatial-frequency information. Further, the amygdala is particularly responsive to LSF information contained within negative (e.g., fearful) facial expressions. In a separate line of research, it has been shown that surprised facial expressions are ambiguous in that they can be interpreted as either negatively or positively valenced. More negative interpretations of surprise are associated with increased ventral amygdala activity. In this report, we show that LSF presentations of surprised expressions bias their interpretation in a negative direction, a finding suggesting that negative interpretations are first and fast during the resolution of ambiguous valence. We also examined the influence of subjects' positivity-negativity bias on this effect.

  13. Neural substrates of human facial expression of pleasant emotion induced by comic films: a PET Study.

    PubMed

    Iwase, Masao; Ouchi, Yasuomi; Okada, Hiroyuki; Yokoyama, Chihiro; Nobezawa, Shuji; Yoshikawa, Etsuji; Tsukada, Hideo; Takeda, Masaki; Yamashita, Ko; Takeda, Masatoshi; Yamaguti, Kouzi; Kuratsune, Hirohiko; Shimizu, Akira; Watanabe, Yasuyoshi

    2002-10-01

    Laughter and smiling are emotional expressions of pleasantness with characteristic contraction of the facial muscles, whose neural substrate remains to be explored. This study is the first to investigate the generation of human facial expression of pleasant emotion using positron emission tomography and H(2)(15)O. Regional cerebral blood flow (rCBF) during laughter/smile induced by visual comics and the magnitude of laughter/smile showed a significant correlation in the bilateral supplementary motor area (SMA) and left putamen (P < 0.05, corrected), but no correlation in the primary motor area (M1). In voluntary facial movement, a significant correlation between rCBF and the magnitude of EMG was found in the face area of bilateral M1 and the SMA (P < 0.001, uncorrected). Laughter/smile, as opposed to voluntary movement, activated the visual association areas, left anterior temporal cortex, left uncus, and orbitofrontal and medial prefrontal cortices (P < 0.05, corrected), whereas voluntary facial movement generated by mimicking a laughing/smiling face activated the face area of the left M1 and bilateral SMA, compared with laughter/smile (P < 0.05, corrected). We demonstrated distinct neural substrates of emotional and volitional facial expression and defined cognitive and experiential processes of a pleasant emotion, laughter/smile.

  14. The Relative Power of an Emotion's Facial Expression, Label, and Behavioral Consequence to Evoke Preschoolers' Knowledge of Its Cause

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2004-01-01

    Lay people and scientists alike assume that, especially for young children, facial expressions are a strong cue to another's emotion. We report a study in which children (N=120; 3-4 years) described events that would cause basic emotions (surprise, fear, anger, disgust, sadness) presented as its facial expression, as its label, or as its…

  15. Intermodal Perception of Fully Illuminated and Point Light Displays of Dynamic Facial Expressions by 7-Month-Old Infants.

    ERIC Educational Resources Information Center

    Soken, Nelson; And Others

    This study considered two questions about infants' perception of affective expressions: (1) Can infants distinguish between happiness and anger on the basis of facial motion information alone? (2) Can infants detect a correspondence between happy and angry facial and vocal expressions by different people? A total of 40 infants of 7 months of age…

  16. Emotional Facial and Vocal Expressions during Story Retelling by Children and Adolescents with High-Functioning Autism

    ERIC Educational Resources Information Center

    Grossman, Ruth B.; Edelson, Lisa R.; Tager-Flusberg, Helen

    2013-01-01

    Purpose: People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Method: Participants were…

  17. A Model of the Perception of Facial Expressions of Emotion by Humans: Research Overview and Perspectives

    PubMed Central

    Martinez, Aleix; Du, Shichuan

    2013-01-01

    In cognitive science and neuroscience, there have been two leading models describing how humans perceive and classify facial expressions of emotion: the continuous and the categorical model. The continuous model defines each facial expression of emotion as a feature vector in a face space. This model explains, for example, how expressions of emotion can be seen at different intensities. In contrast, the categorical model consists of C classifiers, each tuned to a specific emotion category. This model explains, among other findings, why the images in a morphing sequence between a happy and a surprise face are perceived as either happy or surprise but not something in between. While the continuous model has a more difficult time justifying this latter finding, the categorical model is not as good when it comes to explaining how expressions are recognized at different intensities or modes. Most importantly, both models have problems explaining how one can recognize combinations of emotion categories such as happily surprised versus angrily surprised versus surprise. To resolve these issues, in the past several years, we have worked on a revised model that justifies the results reported in the cognitive science and neuroscience literature. This model consists of C distinct continuous spaces. Multiple (compound) emotion categories can be recognized by linearly combining these C face spaces. The dimensions of these spaces are shown to be mostly configural. According to this model, the major task for the classification of facial expressions of emotion is precise, detailed detection of facial landmarks rather than recognition. We provide an overview of the literature justifying the model, show how the resulting model can be employed to build algorithms for the recognition of facial expression of emotion, and propose research directions for machine learning and computer vision researchers to keep pushing the state of the art in these areas. We also discuss how the model can…
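    The model's computational core, C continuous per-emotion face spaces whose scores combine linearly to read out compound categories, can be caricatured as follows. The prototype vectors, the averaging rule, and the category names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Illustrative only: each basic emotion gets a prototype direction in a shared
# (configural) landmark-feature space; a face scores on each of the C
# continuous dimensions, and compound emotions combine those scores linearly.
prototypes = {
    "happy":     np.array([1.0, 0.0, 0.0]),
    "surprised": np.array([0.0, 1.0, 0.0]),
    "angry":     np.array([0.0, 0.0, 1.0]),
}

def emotion_scores(face_vec):
    """Projection of a face feature vector onto each basic-emotion axis."""
    return {e: float(face_vec @ p) for e, p in prototypes.items()}

def compound_score(face_vec, components):
    """Linear combination of basic-emotion scores, e.g. 'happily surprised'."""
    s = emotion_scores(face_vec)
    return sum(s[c] for c in components) / len(components)

face = np.array([0.8, 0.7, 0.05])  # a hypothetical 'happily surprised' face
happily_surprised = compound_score(face, ["happy", "surprised"])   # 0.75
angrily_surprised = compound_score(face, ["angry", "surprised"])   # 0.375
```

    The interesting property, per the abstract, is that a single face can legitimately score high on more than one combination, which neither a pure categorical nor a single continuous space captures.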

  18. Feature-based representations of emotional facial expressions in the human amygdala.

    PubMed

    Ahs, Fredrik; Davis, Caroline F; Gorka, Adam X; Hariri, Ahmad R

    2014-09-01

    The amygdala plays a central role in processing facial affect, responding to diverse expressions and features shared between expressions. Although speculation exists regarding the nature of relationships between expression- and feature-specific amygdala reactivity, this matter has not been fully explored. We used functional magnetic resonance imaging and principal component analysis (PCA) in a sample of 300 young adults to investigate patterns related to expression- and feature-specific amygdala reactivity to faces displaying neutral, fearful, angry or surprised expressions. The PCA revealed a two-dimensional correlation structure that distinguished emotional categories. The first principal component separated neutral and surprised from fearful and angry expressions, whereas the second principal component separated neutral and angry from fearful and surprised expressions. This two-dimensional correlation structure of amygdala reactivity may represent specific feature-based cues conserved across discrete expressions. To delineate which feature-based cues characterized this pattern, face stimuli were averaged and then subtracted according to their principal component loadings. The first principal component corresponded to displacement of the eyebrows, whereas the second principal component corresponded to increased exposure of eye whites together with movement of the brow. Our results suggest a convergent representation of facial affect in the amygdala reflecting feature-based processing of discrete expressions. PMID:23887817
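    The core analysis, PCA over a subjects-by-conditions matrix of amygdala reactivity, can be sketched with synthetic data. The data-generating model and the component interpretation below are illustrative assumptions, not the study's preprocessing:

```python
import numpy as np

def pca_components(X, k=2):
    """Top-k principal components of the condition correlation structure of X.

    X: (n_subjects, n_conditions) matrix of amygdala reactivity values.
    Returns (loadings of shape (n_conditions, k), explained-variance ratios).
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each condition
    cov = np.cov(Z, rowvar=False)              # condition-by-condition structure
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]
    loadings = eigvecs[:, order[:k]]
    ratios = eigvals[order[:k]] / eigvals.sum()
    return loadings, ratios

# Synthetic reactivity to (neutral, fearful, angry, surprised) for 300 subjects,
# built so one latent factor separates neutral+surprised from fearful+angry,
# mimicking the first component reported in the abstract.
rng = np.random.default_rng(1)
factor = rng.normal(size=(300, 1))
signs = np.array([[1.0, -1.0, -1.0, 1.0]])
X = factor @ signs + 0.3 * rng.normal(size=(300, 4))

loadings, ratios = pca_components(X, k=2)
```

    On this toy data the first component loads with one sign on neutral and surprised and the opposite sign on fearful and angry, the separation pattern the abstract attributes to eyebrow displacement.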

  19. Facial expression recognition using local binary patterns and discriminant kernel locally linear embedding

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaoming; Zhang, Shiqing

    2012-12-01

    Given the nonlinear manifold structure of facial images, a new kernel-based supervised manifold learning algorithm based on locally linear embedding (LLE), called discriminant kernel locally linear embedding (DKLLE), is proposed for facial expression recognition. The proposed DKLLE aims to nonlinearly extract the discriminant information by maximizing the interclass scatter while minimizing the intraclass scatter in a reproducing kernel Hilbert space. DKLLE is compared with LLE, supervised locally linear embedding (SLLE), principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA), and kernel linear discriminant analysis (KLDA). Experimental results on two benchmark facial expression databases, i.e., the JAFFE database and the Cohn-Kanade database, demonstrate the effectiveness and promising performance of DKLLE.
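    The local binary pattern features named in the title can be illustrated with the basic 8-neighbour operator; this is a plain, unoptimized sketch, whereas practical systems typically use rotation-invariant or uniform-pattern variants and concatenate region-wise histograms:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern codes for a 2-D grayscale image.

    Each interior pixel gets an 8-bit code: one bit per neighbour, set when
    the neighbour is >= the centre pixel. Border pixels are dropped.
    """
    g = np.asarray(gray, dtype=float)
    centre = g[1:-1, 1:-1]
    # Fixed clockwise neighbour order starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(centre.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy : g.shape[0] - 1 + dy, 1 + dx : g.shape[1] - 1 + dx]
        codes += (neigh >= centre).astype(int) << bit
    return codes

def lbp_histogram(gray):
    """256-bin normalized histogram of LBP codes, the usual texture descriptor."""
    codes = lbp_image(gray)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# Toy 3x3 patch with a single interior pixel; neighbours >= 5 set their bit.
patch = np.array([[6, 4, 6],
                  [4, 5, 6],
                  [6, 4, 4]])
code = int(lbp_image(patch)[0, 0])  # bits 0, 2, 3, 6 set -> 77
```

    Such histograms are the raw features that a manifold method like DKLLE would then embed into a low-dimensional discriminant space.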

  20. Facial expression recognition ability among women with borderline personality disorder: implications for emotion regulation?

    PubMed

    Wagner, A W; Linehan, M M

    1999-01-01

    This study examined recognition of facial expressions of emotion among women diagnosed with borderline personality disorder (BPD; n = 21), compared to a group of women with histories of childhood sexual abuse with no current or prior diagnosis of BPD (n = 21) and a group of women with no history of sexual abuse or BPD (n = 20). Facial recognition was assessed with a slide set developed by Ekman and Matsumoto (Japanese and Caucasian Facial Expressions of Emotion and Neutral Faces, 1992), expanded and improved from previous slide sets, using a coding system that allowed free responses rather than the more typical fixed-response format. Results indicated that borderline individuals were primarily accurate perceivers of others' emotions and showed a tendency toward heightened sensitivity specifically in the recognition of fear. Results are discussed in terms of emotional appraisal ability and emotion dysregulation among individuals with BPD. PMID:10633314

  1. Culture shapes 7-month-olds' perceptual strategies in discriminating facial expressions of emotion.

    PubMed

    Geangu, Elena; Ichikawa, Hiroko; Lao, Junpeng; Kanazawa, So; Yamaguchi, Masami K; Caldara, Roberto; Turati, Chiara

    2016-07-25

    Emotional facial expressions are thought to have evolved because they play a crucial role in species' survival. From infancy, humans develop dedicated neural circuits [1] to exhibit and recognize a variety of facial expressions [2]. But there is increasing evidence that culture specifies when and how certain emotions can be expressed (social norms) and that the mature perceptual mechanisms used to transmit and decode the visual information from emotional signals differ between Western and Eastern adults [3-5]. Specifically, the mouth is more informative for transmitting emotional signals in Westerners and the eye region for Easterners [4], generating culture-specific fixation biases towards these features [5]. During development, it is recognized that cultural differences can be observed at the level of emotional reactivity and regulation [6] and in the culturally dominant modes of attention [7]. Nonetheless, to our knowledge no study has explored whether culture shapes the processing of facial emotional signals early in development. The data we report here show that, by 7 months, infants from both cultures visually discriminate facial expressions of emotion by relying on culturally distinct fixation strategies, resembling those used by the adults from the environment in which they develop [5]. PMID:27458908

  2. Facial Expression of Affect in Children with Cornelia de Lange Syndrome

    ERIC Educational Resources Information Center

    Collis, L.; Moss, J.; Jutley, J.; Cornish, K.; Oliver, C.

    2008-01-01

    Background: Individuals with Cornelia de Lange syndrome (CdLS) have been reported to show comparatively high levels of flat and negative affect but there have been no empirical evaluations. In this study, we use an objective measure of facial expression to compare affect in CdLS with that seen in Cri du Chat syndrome (CDC) and a group of…

  3. Abnormal Amygdala and Prefrontal Cortex Activation to Facial Expressions in Pediatric Bipolar Disorder

    ERIC Educational Resources Information Center

    Garrett, Amy S.; Reiss, Allan L.; Howe, Meghan E.; Kelley, Ryan G.; Singh, Manpreet K.; Adleman, Nancy E.; Karchemskiy, Asya; Chang, Kiki D.

    2012-01-01

    Objective: Previous functional magnetic resonance imaging (fMRI) studies in pediatric bipolar disorder (BD) have reported greater amygdala and less dorsolateral prefrontal cortex (DLPFC) activation to facial expressions compared to healthy controls. The current study investigates whether these differences are associated with the early or late…

  4. Cradling Side Preference Is Associated with Lateralized Processing of Baby Facial Expressions in Females

    ERIC Educational Resources Information Center

    Huggenberger, Harriet J.; Suter, Susanne E.; Reijnen, Ester; Schachinger, Hartmut

    2009-01-01

    Women's cradling side preference has been related to contralateral hemispheric specialization of processing emotional signals; but not of processing baby's facial expression. Therefore, 46 nulliparous female volunteers were characterized as left or non-left holders (HG) during a doll holding task. During a signal detection task they were then…

  5. The Effects of Early Institutionalization on the Discrimination of Facial Expressions of Emotion in Young Children

    ERIC Educational Resources Information Center

    Jeon, Hana; Moulson, Margaret C.; Fox, Nathan; Zeanah, Charles; Nelson, Charles A., III

    2010-01-01

    The current study examined the effects of institutionalization on the discrimination of facial expressions of emotion in three groups of 42-month-old children. One group consisted of children abandoned at birth who were randomly assigned to Care-as-Usual (institutional care) following a baseline assessment. Another group consisted of children…

  6. Effects of Context and Facial Expression on Imitation Tasks in Preschool Children with Autism

    ERIC Educational Resources Information Center

    Markodimitraki, Maria; Kypriotaki, Maria; Ampartzaki, Maria; Manolitsis, George

    2013-01-01

    The present study explored the effect of the context in which an imitation act occurs (elicited/spontaneous) and the experimenter's facial expression (neutral or smiling) during the imitation task with young children with autism and typically developing children. The participants were 10 typically developing children and 10 children with…

  7. Processing of Facial Expressions of Emotions by Adults with Down Syndrome and Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Carvajal, Fernando; Fernandez-Alcaraz, Camino; Rueda, Maria; Sarrion, Louise

    2012-01-01

    The processing of facial expressions of emotions by 23 adults with Down syndrome and moderate intellectual disability was compared with that of adults with intellectual disability of other etiologies (24 matched in cognitive level and 26 with mild intellectual disability). Each participant performed 4 tasks of the Florida Affect Battery and an…

  8. Concealing of facial expressions by a wild Barbary macaque (Macaca sylvanus).

    PubMed

    Thunström, Maria; Kuchenbuch, Paul; Young, Christopher

    2014-07-01

    Behavioural research on non-vocal communication among non-human primates and its possible links to the origin of human language is a long-standing research topic. Because human language is under voluntary control, it is of interest whether this is also true for any communicative signals of other species. It has been argued that the behaviour of hiding a facial expression with one's hand supports the idea that gestures might be under more voluntary control than facial expressions among non-human primates, and it has also been interpreted as a sign of intentionality. So far, the behaviour has only been reported twice, for single gorilla and chimpanzee individuals, both in captivity. Here, we report the first observation of concealing of facial expressions by a monkey, a Barbary macaque (Macaca sylvanus), living in the wild. On eight separate occasions between 2009 and 2011 an adult male was filmed concealing two different facial expressions associated with play and aggression ("play face" and "scream face"), 22 times in total. The videos were analysed in detail, including gaze direction, hand usage, duration, and individuals present. This male was the only individual in his group to manifest this behaviour, which always occurred in the presence of a dominant male. Several possible interpretations of the function of the behaviour are discussed. The observations in this study indicate that the gestural communication and cognitive abilities of monkeys warrant more research attention.

  9. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    ERIC Educational Resources Information Center

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  10. Assessment of Learners' Attention to E-Learning by Monitoring Facial Expressions for Computer Network Courses

    ERIC Educational Resources Information Center

    Chen, Hong-Ren

    2012-01-01

    Recognition of students' facial expressions can be used to understand their level of attention. In a traditional classroom setting, teachers guide the classes and continuously monitor and engage the students to evaluate their understanding and progress. Given the current popularity of e-learning environments, it has become important to assess the…

  11. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    ERIC Educational Resources Information Center

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  12. Interpretation of infant facial expression in the context of maternal postnatal depression.

    PubMed

    Stein, Alan; Arteche, Adriane; Lehtonen, Annukka; Craske, Michelle; Harvey, Allison; Counsell, Nicholas; Murray, Lynne

    2010-06-01

    Postnatal maternal depression is associated with difficulties in maternal responsiveness. As most signals arising from the infant come from facial expressions, one possible explanation for these difficulties is that mothers with postnatal depression are differentially affected by particular infant facial expressions. Thus, this study investigates the effects of postnatal depression on mothers' perceptions of infant facial expressions. Participants (15 controls, 15 depressed and 15 anxious mothers) were asked to rate a number of infant facial expressions, ranging from very positive to very negative. Each face was shown twice, for a short and for a longer period of time, in random order. Results revealed that mothers used more extreme ratings (i.e. more negative or more positive) when shown the infant faces for a longer period of time. Mothers suffering from postnatal depression were more likely than controls to rate negative infant faces shown for a longer period more negatively. The differences were specific to depression rather than an effect of general postnatal psychopathology, as no differences were observed between anxious mothers and controls. There were no other significant differences in maternal ratings of infant faces shown for short periods or for positive or neutral valence faces of either length. The finding that mothers with postnatal depression rate negative infant faces more negatively indicates that an appraisal bias might underlie some of the difficulties that these mothers have in responding to their own infants' signals. PMID:20381873

  13. Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study

    ERIC Educational Resources Information Center

    Messinger, Daniel S.; Mahoor, Mohammad H.; Chow, Sy-Miin; Cohn, Jeffrey F.

    2009-01-01

    Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two 6-month-old infant-mother dyads who each engaged in a face-to-face…

  14. Recognition of Emotional and Nonemotional Facial Expressions: A Comparison between Williams Syndrome and Autism

    ERIC Educational Resources Information Center

    Lacroix, Agnes; Guidetti, Michele; Roge, Bernadette; Reilly, Judy

    2009-01-01

    The aim of our study was to compare two neurodevelopmental disorders (Williams syndrome and autism) in terms of the ability to recognize emotional and nonemotional facial expressions. The comparison of these two disorders is particularly relevant to the investigation of face processing and should contribute to a better understanding of social…

  15. Children's Scripts for Social Emotions: Causes and Consequences Are More Central than Are Facial Expressions

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2010-01-01

    Understanding and recognition of emotions relies on emotion concepts, which are narrative structures (scripts) specifying facial expressions, causes, consequences, label, etc. organized in a temporal and causal order. Scripts and their development are revealed by examining which components better tap which concepts at which ages. This study…

  16. The Effect of Verbal Statements of Context on Facial Expressions of Emotion.

    ERIC Educational Resources Information Center

    Knudsen, Harold R.; Muzekari, Louis H.

    A study was conducted to examine the extent to which verbal statements of context influenced the perception of emotion in facial expressions. In addition, it examined the pairing of both congruent and incongruent stimulus sources. The subjects, 98 college students, were shown photographs of four male and four female actors displaying facial…

  17. The Understanding of the Emotional Meaning of Facial Expressions in People with Autism.

    ERIC Educational Resources Information Center

    Celani, Giorgio; Battacchi, Marco Walter; Arcidiacono, Letizia

    1999-01-01

    Ten children (ages 5 to 16) with autism, 10 with Down syndrome, and 10 controls were tested on a task that required matching faces on the basis of emotion and on a task that required judging pleasantness of a face. Children with autism performed worse on both facial-expression-of-emotion subtasks. (Author/CR)

  18. Understanding Emotions from Standardized Facial Expressions in Autism and Normal Development

    ERIC Educational Resources Information Center

    Castelli, Fulvia

    2005-01-01

    The study investigated the recognition of standardized facial expressions of emotion (anger, fear, disgust, happiness, sadness, surprise) at a perceptual level (experiment 1) and at a semantic level (experiments 2 and 3) in children with autism (N= 20) and normally developing children (N= 20). Results revealed that children with autism were as…

  19. Infants' Intermodal Perception of Canine ("Canis familiaris") Facial Expressions and Vocalizations

    ERIC Educational Resources Information Center

    Flom, Ross; Whipple, Heather; Hyde, Daniel

    2009-01-01

    From birth, human infants are able to perceive a wide range of intersensory relationships. The current experiment examined whether infants between 6 months and 24 months old perceive the intermodal relationship between aggressive and nonaggressive canine vocalizations (i.e., barks) and appropriate canine facial expressions. Infants simultaneously…

  20. Does Facial Expression Recognition Provide a Toehold for the Development of Emotion Understanding?

    ERIC Educational Resources Information Center

    Strand, Paul S.; Downs, Andrew; Barbosa-Leiker, Celestina

    2016-01-01

    The authors explored predictions from basic emotion theory (BET) that facial emotion expression recognition skills are insular with respect to their own development, and yet foundational to the development of emotional perspective-taking skills. Participants included 417 preschool children for whom estimates of these 2 emotion understanding…

  1. Singing emotionally: a study of pre-production, production, and post-production facial expressions

    PubMed Central

    Quinto, Lena R.; Thompson, William F.; Kroos, Christian; Palmer, Caroline

    2014-01-01

    Singing involves vocal production accompanied by a dynamic and meaningful use of facial expressions, which may serve as ancillary gestures that complement, disambiguate, or reinforce the acoustic signal. In this investigation, we examined the use of facial movements to communicate emotion, focusing on movements arising in three epochs: before vocalization (pre-production), during vocalization (production), and immediately after vocalization (post-production). The stimuli were recordings of seven vocalists' facial movements as they sang short (14-syllable) melodic phrases with the intention of communicating happiness, sadness, irritation, or no emotion. Facial movements were presented as point-light displays to 16 observers who judged the emotion conveyed. Experiment 1 revealed that the accuracy of emotional judgment varied with singer, emotion, and epoch. Accuracy was highest in the production epoch; however, happiness was well communicated in the pre-production epoch. In Experiment 2, observers judged point-light displays of exaggerated movements. The ratings suggested that the extent of facial and head movements was largely perceived as a gauge of emotional arousal. In Experiment 3, observers rated point-light displays of scrambled movements. Configural information was removed in these stimuli but velocity and acceleration were retained. Exaggerated scrambled movements were likely to be associated with happiness or irritation whereas unexaggerated scrambled movements were more likely to be identified as “neutral.” An analysis of singers' facial movements revealed systematic changes as a function of the emotional intentions of singers. The findings confirm the central role of facial expressions in vocal emotional communication, and highlight individual differences between singers in the amount and intelligibility of facial movements made before, during, and after vocalization. PMID:24808868

  2. Joint recognition-expression impairment of facial emotions in Huntington's disease despite intact understanding of feelings.

    PubMed

    Trinkler, Iris; Cleret de Langavant, Laurent; Bachoud-Lévi, Anne-Catherine

    2013-02-01

    Patients with Huntington's disease (HD), a neurodegenerative disorder that causes major motor impairments, also show cognitive and emotional deficits. While their deficit in recognising emotions has been explored in depth, little is known about their ability to express emotions and understand their feelings. If these faculties were impaired, patients might not only mis-read emotion expressions in others, but their own emotions might be mis-interpreted by others as well, or, thirdly, they might have difficulties understanding and describing their feelings. We compared the performance of recognition and expression of facial emotions in 13 HD patients with mild motor impairments but without significant bucco-facial abnormalities, and 13 controls matched for age and education. Emotion recognition was investigated in a forced-choice recognition test (FCR), and emotion expression by filming participants while they mimed the six basic emotional facial expressions (anger, disgust, fear, surprise, sadness and joy) to the experimenter. The films were then segmented into 60 stimuli per participant and four external raters performed a FCR on this material. Further, we tested understanding of feelings in self (alexithymia) and others (empathy) using questionnaires. Both recognition and expression were impaired across different emotions in HD compared to controls, and recognition and expression scores were correlated. This suggests that the emotion deficits in HD might be tied to the expression process itself. By contrast, alexithymia and empathy scores were very similar in HD and controls. Because similar emotion recognition-expression deficits are also found in Parkinson's disease and vascular lesions of the striatum, our results further confirm the importance of the striatum for emotion recognition and expression, while access to the meaning of feelings relies on a different brain network and is spared in HD.

  3. Does Parkinson's disease lead to alterations in the facial expression of pain?

    PubMed

    Priebe, Janosch A; Kunz, Miriam; Morcinek, Christian; Rieckmann, Peter; Lautenbacher, Stefan

    2015-12-15

    Hypomimia, a reduced degree of facial expressiveness, is a common sign in Parkinson's disease (PD). The objective of our study was to investigate how hypomimia affects PD patients' facial expression of pain. The facial expressions of 23 idiopathic PD patients in the Off-phase (without dopaminergic medication) and the On-phase (after dopaminergic medication intake), and of 23 matched controls, in response to phasic heat pain and a temporal summation procedure were recorded and analyzed for overall and specific alterations using the Facial Action Coding System (FACS). We found reduced overall facial activity in response to pain in PD patients in the Off-phase, which was less pronounced in the On-phase. In particular, the highly pain-relevant eye-narrowing occurred less frequently in PD patients than in controls in both phases, while the frequencies of other pain-relevant movements, such as upper-lip raise (in the On-phase) and contraction of the eyebrows (in both phases), did not differ between groups. Moreover, opening of the mouth (which is often not considered pain-relevant) was the most frequently displayed movement in PD patients, whereas eye-narrowing was the most frequent movement in controls. Thus, PD patients showed not only overall quantitative changes in the degree of facial pain expressiveness but also qualitative changes. The latter refer to a strongly affected encoding of the sensory dimension of pain (eye-narrowing), while the encoding of the affective dimension of pain (contraction of the eyebrows) was preserved. This imbalanced pain signal might affect pain communication and pain assessment.

  4. The Odor Context Facilitates the Perception of Low-Intensity Facial Expressions of Emotion.

    PubMed

    Leleu, Arnaud; Demily, Caroline; Franck, Nicolas; Durand, Karine; Schaal, Benoist; Baudouin, Jean-Yves

    2015-01-01

    It has been established that the recognition of facial expressions integrates contextual information. In this study, we aimed to clarify the influence of contextual odors. The participants were asked to match a target face varying in expression intensity with non-ambiguous expressive faces. Intensity variations in the target faces were designed by morphing expressive faces with neutral faces. In addition, the influence of verbal information was assessed by providing half the participants with the emotion names. Odor cues were manipulated by placing participants in a pleasant (strawberry), aversive (butyric acid), or no-odor control context. The results showed two main effects of the odor context. First, the minimum amount of visual information required to perceive an expression was lowered when the odor context was emotionally congruent: happiness was correctly perceived at lower intensities in the faces displayed in the pleasant odor context, and the same phenomenon occurred for disgust and anger in the aversive odor context. Second, the odor context influenced the false perception of expressions that were not used in target faces, with distinct patterns according to the presence of emotion names. When emotion names were provided, the aversive odor context decreased intrusions for disgust ambiguous faces but increased them for anger. When the emotion names were not provided, this effect did not occur and the pleasant odor context elicited an overall increase in intrusions for negative expressions. We conclude that olfaction plays a role in the way facial expressions are perceived in interaction with other contextual influences such as verbal information.

  5. Facial expression recognition in peripheral versus central vision: role of the eyes and the mouth.

    PubMed

    Calvo, Manuel G; Fernández-Martín, Andrés; Nummenmaa, Lauri

    2014-03-01

    This study investigated facial expression recognition in peripheral relative to central vision, and the factors accounting for the recognition advantage of some expressions in the visual periphery. Whole faces or only the eyes or the mouth regions were presented for 150 ms, either at fixation or extrafoveally (2.5° or 6°), followed by a backward mask and a probe word. Results indicated that (a) all the basic expressions were recognized above chance level, although performance in peripheral vision was less impaired for happy than for non-happy expressions, (b) the happy face advantage remained when only the mouth region was presented, and (c) the smiling mouth was the most visually salient and most distinctive facial feature of all expressions. This suggests that the saliency and the diagnostic value of the smile account for the advantage in happy face recognition in peripheral vision. Because of saliency, the smiling mouth accrues sensory gain and becomes resistant to visual degradation due to stimulus eccentricity, thus remaining accessible extrafoveally. Because of diagnostic value, the smile provides a distinctive single cue of facial happiness, thus bypassing integration of face parts and reducing susceptibility to breakdown of configural processing in peripheral vision.

  6. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    NASA Astrophysics Data System (ADS)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction, and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum-relevance minimum-redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
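
    The one-against-one strategy used above trains one binary classifier per pair of expression classes (21 classifiers for seven classes) and decides by majority vote. The following is a minimal sketch of that voting scheme only, with a trivial nearest-centroid rule standing in for each pairwise SVM and toy two-dimensional "geometric features"; it is not the paper's implementation, which uses maximum-relevance minimum-redundancy features.

```python
# Sketch of one-against-one multi-class voting. A nearest-centroid rule
# stands in for each pairwise SVM; features and class data are invented.
from itertools import combinations
from collections import Counter

# The seven target classes from the abstract; with 7 classes there are
# 7 * 6 / 2 = 21 pairwise classifiers.
EXPRESSIONS = ["neutral", "happy", "sad", "angry", "fear", "disgust", "surprise"]

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_one_vs_one(features_by_class):
    """One binary 'classifier' (here: a pair of class centroids) per class pair."""
    models = {}
    for a, b in combinations(features_by_class, 2):
        models[(a, b)] = (centroid(features_by_class[a]),
                          centroid(features_by_class[b]))
    return models

def predict(models, x):
    """Each pairwise model casts one vote; the majority vote wins."""
    votes = Counter()
    for (a, b), (ca, cb) in models.items():
        votes[a if dist2(x, ca) <= dist2(x, cb) else b] += 1
    return votes.most_common(1)[0][0]
```

    In practice each pairwise model would be an SVM trained on the selected geometric features, but the voting logic is the same.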

  7. Action Unit Models of Facial Expression of Emotion in the Presence of Speech

    PubMed Central

    Shah, Miraj; Cooper, David G.; Cao, Houwei; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini

    2014-01-01

    Automatic recognition of emotion using facial expressions in the presence of speech poses a unique challenge because talking reveals clues for the affective state of the speaker but distorts the canonical expression of emotion on the face. We introduce a corpus of acted emotion expression where speech is either present (talking) or absent (silent). The corpus is uniquely suited for analysis of the interplay between the two conditions. We use a multimodal decision level fusion classifier to combine models of emotion from talking and silent faces as well as from audio to recognize five basic emotions: anger, disgust, fear, happy and sad. Our results strongly indicate that emotion prediction in the presence of speech from action unit facial features is less accurate when the person is talking. Modeling talking and silent expressions separately and fusing the two models greatly improves accuracy of prediction in the talking setting. The advantages are most pronounced when silent and talking face models are fused with predictions from audio features. In this multi-modal prediction both the combination of modalities and the separate models of talking and silent facial expression of emotion contribute to the improvement. PMID:25525561
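
    Decision-level fusion of the kind described above combines the per-emotion outputs of the modality-specific classifiers before a single decision is taken. Below is a hedged sketch with invented score dictionaries and a simple weighted average as the fusion rule; the paper's actual fusion classifier may differ.

```python
# Illustrative decision-level fusion: each modality-specific model (talking
# face, silent face, audio) yields a score per emotion; fusion is a weighted
# average of those scores followed by an arg-max. Scores and weights here
# are made up for illustration.
EMOTIONS = ["anger", "disgust", "fear", "happy", "sad"]

def fuse(score_dicts, weights=None):
    """Weighted average of per-emotion scores from several classifiers."""
    if weights is None:
        weights = [1.0] * len(score_dicts)
    total = {e: 0.0 for e in EMOTIONS}
    for scores, w in zip(score_dicts, weights):
        for e in EMOTIONS:
            total[e] += w * scores[e]
    norm = sum(weights)
    return {e: total[e] / norm for e in EMOTIONS}

def decide(score_dicts, weights=None):
    """Final predicted emotion after fusion."""
    fused = fuse(score_dicts, weights)
    return max(fused, key=fused.get)
```

    Separate talking-face and silent-face models would each contribute one score dictionary, alongside the audio model, matching the multi-modal setup the abstract describes.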

  8. A facial expression image database and norm for Asian population: a preliminary report

    NASA Astrophysics Data System (ADS)

    Chen, Chien-Chung; Cho, Shu-ling; Horszowska, Katarzyna; Chen, Mei-Yen; Wu, Chia-Ching; Chen, Hsueh-Chih; Yeh, Yi-Yu; Cheng, Chao-Min

    2009-01-01

    We collected 6604 images of 30 models displaying eight types of facial expression: happiness, anger, sadness, disgust, fear, surprise, contempt and neutral. Among them, the 406 most representative images, from 12 models, were rated by more than 200 human raters for perceived emotion category and intensity. Such a large number of emotion categories, models and raters is sufficient for most serious expression-recognition research, both in psychology and in computer science. All the models and raters are of Asian background; hence, this database can also be used when cultural background is a concern. In addition, 43 landmarks on each of the 291 rated frontal-view images were identified and recorded. This information should facilitate feature-based research on facial expression. Overall, the diversity of images and richness of information should make our database and norm useful for a wide range of research.

  9. Functionally relevant responses to human facial expressions of emotion in the domestic horse (Equus caballus).

    PubMed

    Smith, Amy Victoria; Proops, Leanne; Grounds, Kate; Wathan, Jennifer; McComb, Karen

    2016-02-01

    Whether non-human animals can recognize human signals, including emotions, has both scientific and applied importance, and is particularly relevant for domesticated species. This study presents the first evidence of horses' abilities to spontaneously discriminate between positive (happy) and negative (angry) human facial expressions in photographs. Our results showed that the angry faces induced responses indicative of a functional understanding of the stimuli: horses displayed a left-gaze bias (a lateralization generally associated with stimuli perceived as negative) and a quicker increase in heart rate (HR) towards these photographs. Such lateralized responses towards human emotion have previously only been documented in dogs, and effects of facial expressions on HR have not been shown in any heterospecific studies. Alongside the insights that these findings provide into interspecific communication, they raise interesting questions about the generality and adaptiveness of emotional expression and perception across species.

  10. Why are you angry with me? Facial expressions of threat influence perception of gaze direction.

    PubMed

    Ewbank, Michael P; Jennings, Caroline; Calder, Andrew J

    2009-01-01

    Gaze direction can influence the processing of facial expressions. Angry faces are judged more angry when displaying a direct gaze compared to an averted gaze. We investigated whether facial expressions have a reciprocal influence on the perception of gaze. Participants judged the gaze of angry, fearful and neutral faces across a range of gaze directions. Angry faces were perceived as looking at the observer over a wider range than were fearful or neutral faces, which did not significantly differ. This effect was eliminated when presenting inverted faces, suggesting these results cannot be accounted for by differences in visible eye information. Our findings suggest the existence of a reciprocal influence between gaze direction and angry expressions.

  11. Facial Expression in Response to Smell and Taste Stimuli in Small and Appropriate for Gestational Age Newborns.

    PubMed

    Rotstein, Michael; Stolar, Orit; Uliel, Shimrit; Mandel, Dror; Mani, Ariel; Dollberg, Shaul; Reifen, Ram; Steiner, Jacob E; Harel, Shaul; Leitner, Yael

    2015-10-01

    Small for gestational age newborns can later suffer from eating difficulties and slow growth. Nutritional preferences can be influenced by changes in sensory perception of smell and taste. To determine whether these could be detected at birth, the authors examined the different recognition pattern of smell and taste in small for gestational age newborns compared to appropriate for gestational age controls, as expressed by gusto-facial and naso-facial reflexes. The authors performed video analysis of facial expressions of 10 small for gestational age and 12 control newborns exposed to various tastes and smells. No difference in the facial recognition patterns for taste or smell was demonstrated between small for gestational age and controls, except for perception of distilled water. Newborns show recognizable patterns of facial expression in response to taste and smell stimuli. Perception of taste and smell in small for gestational age newborns is not different from controls, as measured by the method of facial recognition.

  12. The role of the cannabinoid receptor in adolescents' processing of facial expressions.

    PubMed

    Ewald, Anais; Becker, Susanne; Heinrich, Angela; Banaschewski, Tobias; Poustka, Luise; Bokde, Arun; Büchel, Christian; Bromberg, Uli; Cattrell, Anna; Conrod, Patricia; Desrivières, Sylvane; Frouin, Vincent; Papadopoulos-Orfanos, Dimitri; Gallinat, Jürgen; Garavan, Hugh; Heinz, Andreas; Walter, Henrik; Ittermann, Bernd; Gowland, Penny; Paus, Tomáš; Martinot, Jean-Luc; Paillère Martinot, Marie-Laure; Smolka, Michael N; Vetter, Nora; Whelan, Rob; Schumann, Gunter; Flor, Herta; Nees, Frauke

    2016-01-01

    The processing of emotional faces is an important prerequisite for adequate social interactions in daily life, and might thus specifically be altered in adolescence, a period marked by significant changes in social emotional processing. Previous research has shown that the cannabinoid receptor CB1R is associated with longer gaze duration and increased brain responses in the striatum to happy faces in adults; yet, for adolescents, it is not clear whether an association between CB1R and face processing exists. In the present study we investigated genetic effects of two CB1R polymorphisms, rs1049353 and rs806377, on the processing of emotional faces in healthy adolescents. They participated in functional magnetic resonance imaging during a Faces Task, watching blocks of video clips with angry and neutral facial expressions, and completed a Morphed Faces Task in the laboratory, where they looked at different facial expressions that switched from anger to fear or sadness, or from happiness to fear or sadness, and labelled them according to these four emotional expressions. A-allele carriers, compared with GG-carriers, for rs1049353 displayed earlier recognition of facial expressions changing from anger to sadness or fear, but not of expressions changing from happiness to sadness or fear, and higher brain responses to angry, but not neutral, faces in the amygdala and insula. For rs806377, no significant effects emerged. This suggests that rs1049353 is involved in the processing of anger-related negative facial expressions in adolescence. These findings add to our understanding of social-emotion-related mechanisms in this life period. PMID:26527537

  14. Facial expression recognition and model-based regeneration for distance teaching

    NASA Astrophysics Data System (ADS)

    De Silva, Liyanage C.; Vinod, V. V.; Sengupta, Kuntal

    1998-12-01

    This paper presents a novel idea for a visual communication system that can support distance teaching over a network of computers. The authors' main focus is to enhance the quality of distance teaching by reducing the barrier between teacher and student that arises from the remote connection of the networked participants. The paper presents an effective way of improving the teacher-student communication link in an IT (Information Technology) based distance-teaching scenario, using facial expression recognition results and global and local face motion detection results for both the teacher and the student. It presents a way of regenerating the facial images for the teacher-student down-link, which can enhance the teacher's facial expressions while reducing network traffic compared with usual video-broadcasting scenarios. At the same time, it presents a way of representing a large volume of facial expression data for the whole student population (in the student-teacher up-link). This up-link representation helps the teacher receive instant feedback on his talk, as if he were delivering a face-to-face lecture; in conventional video-teleconferencing applications this task is nearly impossible, due to the huge volume of upward network traffic. The authors draw on several of their previously published results for most of the image-processing components needed to complete such a system; some of the remaining components are covered by ongoing work.

  15. The Thatcher illusion reveals orientation dependence in brain regions involved in processing facial expressions.

    PubMed

    Psalta, Lilia; Young, Andrew W; Thompson, Peter; Andrews, Timothy J

    2014-01-01

    Although the processing of facial identity is known to be sensitive to the orientation of the face, it is less clear whether orientation sensitivity extends to the processing of facial expressions. To address this issue, we used functional MRI (fMRI) to measure the neural response to the Thatcher illusion. This illusion involves a local inversion of the eyes and mouth in a smiling face-when the face is upright, the inverted features make it appear grotesque, but when the face is inverted, the inversion is no longer apparent. Using an fMRI-adaptation paradigm, we found a release from adaptation in the superior temporal sulcus-a region directly linked to the processing of facial expressions-when the images were upright and they changed from a normal to a Thatcherized configuration. However, this release from adaptation was not evident when the faces were inverted. These results show that regions involved in processing facial expressions display a pronounced orientation sensitivity.

  16. Spontaneous facial expression in unscripted social interactions can be measured automatically.

    PubMed

    Girard, Jeffrey M; Cohn, Jeffrey F; Jeni, Laszlo A; Sayette, Michael A; De la Torre, Fernando

    2015-12-01

    Methods to assess individual facial actions have the potential to shed light on important behavioral phenomena, ranging from emotion and social interaction to psychological disorders and health. However, manual coding of such actions is labor-intensive and requires extensive training. To date, establishing reliable automated coding of unscripted facial actions has been a daunting challenge, impeding the development of psychological theories and applications requiring facial expression assessment. It is therefore essential that automated coding systems be developed with enough precision and robustness to ease the burden of manual coding in challenging data involving variation in participant gender, ethnicity, head pose, speech, and occlusion. We report a major advance in automated coding of spontaneous facial actions during an unscripted social interaction involving three strangers. For each participant (n = 80, 47% women, 15% non-white), 25 facial action units (AUs) were manually coded from video using the Facial Action Coding System. Twelve AUs occurred more than 3% of the time and were processed using automated FACS coding. Automated coding showed very strong reliability for the proportion of time that each AU occurred (mean intraclass correlation = 0.89), and by the more stringent criterion of frame-by-frame reliability it was moderate to strong (mean Matthews correlation coefficient = 0.61). With few exceptions, differences in AU detection related to gender, ethnicity, pose, and average pixel intensity were small. Fewer than 6% of frames could be coded manually but not automatically. These findings suggest that automated FACS coding has progressed sufficiently to be applied to observational research in emotion and related areas of study.
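
    The frame-by-frame reliability criterion above is a Matthews correlation between the manual and automated binary codings of each AU across frames. A self-contained sketch of the computation (variable and function names are illustrative, not from the study):

```python
# Matthews correlation coefficient (MCC) between two binary codings of the
# same frames: 1 = AU present in that frame, 0 = absent. Returns 0.0 for
# the degenerate case where the denominator vanishes.
import math

def matthews_corrcoef(manual, automated):
    tp = sum(1 for m, a in zip(manual, automated) if m == 1 and a == 1)
    tn = sum(1 for m, a in zip(manual, automated) if m == 0 and a == 0)
    fp = sum(1 for m, a in zip(manual, automated) if m == 0 and a == 1)
    fn = sum(1 for m, a in zip(manual, automated) if m == 1 and a == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

    Unlike raw percent agreement, the MCC stays near zero when an AU is rare and a coder could "agree" simply by always predicting absence, which is why it is the more stringent criterion.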

  17. Inducing a concurrent motor load reduces categorization precision for facial expressions.

    PubMed

    Ipser, Alberta; Cook, Richard

    2016-05-01

    Motor theories of expression perception posit that observers simulate facial expressions within their own motor system, aiding perception and interpretation. Consistent with this view, reports have suggested that blocking facial mimicry induces expression labeling errors and alters patterns of ratings. Crucially, however, it is unclear whether changes in labeling and rating behavior reflect genuine perceptual phenomena (e.g., greater internal noise associated with expression perception or interpretation) or are products of response bias. In an effort to advance this literature, the present study introduces a new psychophysical paradigm for investigating motor contributions to expression perception that overcomes some of the limitations inherent in simple labeling and rating tasks. Observers were asked to judge whether smiles drawn from a morph continuum were sincere or insincere, in the presence or absence of a motor load induced by the concurrent production of vowel sounds. Having confirmed that smile sincerity judgments depend on cues from both eye and mouth regions (Experiment 1), we demonstrated that vowel production reduces the precision with which smiles are categorized (Experiment 2). In Experiment 3, we replicated this effect when observers were required to produce vowels, but not when they passively listened to the same vowel sounds. In Experiments 4 and 5, we found that gender categorizations, equated for difficulty, were unaffected by vowel production, irrespective of the presence of a smiling expression. These findings greatly advance our understanding of motor contributions to expression perception and represent a timely contribution in light of recent high-profile challenges to the existing evidence base.

  18. Association between facial expression and PTSD symptoms among young children exposed to the Great East Japan Earthquake: a pilot study.

    PubMed

    Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude

    2015-01-01

    "Emotional numbing" is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent's Report of the Child's Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes ('baseline video') followed by a 2-min video clip from a television comedy ('comedy video'). Children's facial expressions were processed the using Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children's reactions to disasters. PMID:26528206

  20. Dissimilar processing of emotional facial expressions in human and monkey temporal cortex.

    PubMed

    Zhu, Qi; Nelissen, Koen; Van den Stock, Jan; De Winter, François-Laurent; Pauwels, Karl; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu

    2013-02-01

    Emotional facial expressions play an important role in social communication across primates. Despite major progress made in our understanding of categorical information processing such as for objects and faces, little is known, however, about how the primate brain evolved to process emotional cues. In this study, we used functional magnetic resonance imaging (fMRI) to compare the processing of emotional facial expressions between monkeys and humans. We used a 2×2×2 factorial design with species (human and monkey), expression (fear and chewing) and configuration (intact versus scrambled) as factors. At the whole brain level, neural responses to conspecific emotional expressions were anatomically confined to the superior temporal sulcus (STS) in humans. Within the human STS, we found functional subdivisions with a face-selective right posterior STS area that also responded to emotional expressions of other species and a more anterior area in the right middle STS that responded specifically to human emotions. Hence, we argue that the latter region does not show a mere emotion-dependent modulation of activity but is primarily driven by human emotional facial expressions. Conversely, in monkeys, emotional responses appeared in earlier visual cortex and outside face-selective regions in inferior temporal cortex that responded also to multiple visual categories. Within monkey IT, we also found areas that were more responsive to conspecific than to non-conspecific emotional expressions but these responses were not as specific as in human middle STS. Overall, our results indicate that human STS may have developed unique properties to deal with social cues such as emotional expressions.

  1. Facial Muscle Coordination in Monkeys During Rhythmic Facial Expressions and Ingestive Movements

    PubMed Central

    Shepherd, Stephen V.; Lanzilotto, Marco; Ghazanfar, Asif A.

    2012-01-01

    Evolutionary hypotheses regarding the origins of communication signals generally, and primate orofacial communication signals in particular, suggest that these signals derive by ritualization of noncommunicative behaviors, notably including ingestive behaviors such as chewing and nursing. These theories are appealing in part because of the prominent periodicities in both types of behavior. Despite their intuitive appeal, however, there are little or no data with which to evaluate these theories because the coordination of muscles innervated by the facial nucleus has not been carefully compared between communicative and ingestive movements. Such data are especially crucial for reconciling neurophysiological assumptions regarding facial motor control in communication and ingestion. We here address this gap by contrasting the coordination of facial muscles during different types of rhythmic orofacial behavior in macaque monkeys, finding that the perioral muscles innervated by the facial nucleus are rhythmically coordinated during lipsmacks and that this coordination appears distinct from that observed during ingestion. PMID:22553017

  2. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    PubMed

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscles, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than in females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. PMID:25872024
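
    The displacement statistics reported above reduce to Euclidean distances between marker positions across frames. A minimal sketch, assuming each marker is given as a list of (x, y, z) positions over time with the first frame as the rest position (the data layout is an assumption, not the study's):

```python
# Per-marker maximum displacement from the rest (first-frame) position, and
# the mean of those maxima across markers, as summary statistics per
# expression. Coordinates are in millimetres in this toy layout.
import math

def max_displacement(trajectory):
    """Maximum Euclidean distance of one marker from its first-frame position."""
    x0, y0, z0 = trajectory[0]
    return max(math.dist((x, y, z), (x0, y0, z0)) for x, y, z in trajectory)

def mean_max_displacement(markers):
    """Mean of the per-marker maximum displacements."""
    vals = [max_displacement(t) for t in markers]
    return sum(vals) / len(vals)
```

    Applied per expression and per group, such summaries yield figures of the kind quoted in the abstract (e.g., mean displacements per expression in males versus females).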

  4. Looking for trouble: revenge-planning and preattentive vigilance for angry facial expressions.

    PubMed

    Crowe, Sarah E; Wilkowski, Benjamin M

    2013-08-01

    Revenge-planning refers to individual differences in the tendency to actively seek out hostile confrontations with others. Building on past theory, we hypothesized that revenge-planning would be related to preattentive vigilance for angry facial expressions. By being vigilant for such expressions, individuals could more readily notice and prepare to confront social challenges. We conducted 2 studies to test this prediction. Across studies, results indicated that participants high in revenge-planning had significantly longer color-naming latencies for masked angry expressions presented in a subliminal Stroop task, regardless of whether the expression was presented inside or outside participants' attentional focus. This phenomenon was specific to revenge-planning and did not extend to the related construct of angry rumination. Such results suggest that preattentive vigilance for angry expressions supports a confrontational social style in which a person actively seeks out hostile social encounters. PMID:23527511

  5. Information processing of motion in facial expression and the geometry of dynamical systems

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.

    2004-12-01

    An interesting problem in analysis of video data concerns design of algorithms that detect perceptually significant features in an unsupervised manner, for instance methods of machine learning for automatic classification of human expression. A geometric formulation of this genre of problems could be modeled with the help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space XP whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space XP are independent of subjective evaluations by observers. While the "subjective geometry" of XP varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, statistical geometry of invariants of XP for a population sample could provide effective algorithms for extraction of such features. In cases where the frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encode motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.

  7. Spatial and temporal analysis of gene expression during growth and fusion of the mouse facial prominences.

    PubMed

    Feng, Weiguo; Leach, Sonia M; Tipney, Hannah; Phang, Tzulip; Geraci, Mark; Spritz, Richard A; Hunter, Lawrence E; Williams, Trevor

    2009-12-16

    Orofacial malformations resulting from genetic and/or environmental causes are frequent human birth defects yet their etiology is often unclear because of insufficient information concerning the molecular, cellular and morphogenetic processes responsible for normal facial development. We have, therefore, derived a comprehensive expression dataset for mouse orofacial development, interrogating three distinct regions - the mandibular, maxillary and frontonasal prominences. To capture the dynamic changes in the transcriptome during face formation, we sampled five time points between E10.5-E12.5, spanning the developmental period from establishment of the prominences to their fusion to form the mature facial platform. Seven independent biological replicates were used for each sample ensuring robustness and quality of the dataset. Here, we provide a general overview of the dataset, characterizing aspects of gene expression changes at both the spatial and temporal level. Considerable coordinate regulation occurs across the three prominences during this period of facial growth and morphogenesis, with a switch from expression of genes involved in cell proliferation to those associated with differentiation. An accompanying shift in the expression of polycomb and trithorax genes presumably maintains appropriate patterns of gene expression in precursor or differentiated cells, respectively. Superimposed on the many coordinated changes are prominence-specific differences in the expression of genes encoding transcription factors, extracellular matrix components, and signaling molecules. Thus, the elaboration of each prominence will be driven by particular combinations of transcription factors coupled with specific cell:cell and cell:matrix interactions. The dataset also reveals several prominence-specific genes not previously associated with orofacial development, a subset of which we externally validate. Several of these latter genes are components of bidirectional

  8. Fetal facial expression in response to intravaginal music emission

    PubMed Central

    García-Faura, Álex; Prats-Galino, Alberto

    2015-01-01

    This study compared fetal response to musical stimuli applied intravaginally (intravaginal music [IVM]) with application via emitters placed on the mother’s abdomen (abdominal music [ABM]). Responses were quantified by recording facial movements identified on 3D/4D ultrasound. One hundred and six normal pregnancies between 14 and 39 weeks of gestation were randomized to 3D/4D ultrasound with: (a) ABM with standard headphones (flute monody at 98.6 dB); (b) IVM with a specially designed device emitting the same monody at 53.7 dB; or (c) intravaginal vibration (IVV; 125 Hz) at 68 dB with the same device. Facial movements were quantified at baseline, during stimulation, and for 5 minutes after stimulation was discontinued. In fetuses at a gestational age of >16 weeks, IVM elicited mouthing (MT) and tongue expulsion (TE) in 86.7% and 46.6% of fetuses, respectively, with significant differences when compared with ABM and IVV (p = 0.002 and p = 0.004, respectively). There were no changes from baseline in ABM and IVV. TE occurred ≥5 times in 5 minutes in 13.3% with IVM. IVM was related with higher occurrence of MT (odds ratio = 10.980; 95% confidence interval = 3.105–47.546) and TE (odds ratio = 10.943; 95% confidence interval = 2.568–77.037). The frequency of TE with IVM increased significantly with gestational age (p = 0.024). Fetuses at 16–39 weeks of gestation respond to intravaginally emitted music with repetitive MT and TE movements not observed with ABM or IVV. Our findings suggest that neural pathways participating in the auditory–motor system are developed as early as gestational week 16. These findings might contribute to diagnostic methods for prenatal hearing screening, and research into fetal neurological stimulation. PMID:26539240
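    The odds ratios and confidence intervals reported above are standard two-by-two-table statistics; a minimal sketch with hypothetical counts (the abstract reports the ratios but not the underlying tables, so the numbers below are illustrative only):

    ```python
    import math

    # Hypothetical 2x2 table for illustration only; the abstract reports odds
    # ratios but not the raw counts. Rows: stimulation type; columns: mouthing
    # observed yes/no. 13/15 IVM fetuses approximates the abstract's 86.7%.
    a, b = 13, 2     # intravaginal music (IVM): mouthing yes / no
    c, d = 8, 22     # abdominal music (ABM): mouthing yes / no

    odds_ratio = (a * d) / (b * c)

    # Wald 95% confidence interval on the log-odds scale.
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    log_or = math.log(odds_ratio)
    ci_low = math.exp(log_or - 1.96 * se_log_or)
    ci_high = math.exp(log_or + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
    ```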

  9. Why do fearful facial expressions elicit behavioral approach? Evidence from a combined approach-avoidance implicit association test.

    PubMed

    Hammer, Jennifer L; Marsh, Abigail A

    2015-04-01

    Despite communicating a "negative" emotion, fearful facial expressions predominantly elicit behavioral approach from perceivers. It has been hypothesized that this seemingly paradoxical effect may occur due to fearful expressions' resemblance to vulnerable, infantile faces. However, this hypothesis has not yet been tested. We used a combined approach-avoidance/implicit association test (IAT) to test this hypothesis. Participants completed an approach-avoidance lever task during which they responded to fearful and angry facial expressions as well as neutral infant and adult faces presented in an IAT format. Results demonstrated an implicit association between fearful facial expressions and infant faces and showed that both fearful expressions and infant faces primarily elicit behavioral approach. The dominance of approach responses to both fearful expressions and infant faces decreased as a function of psychopathic personality traits. Results suggest that the prosocial responses to fearful expressions observed in most individuals may stem from their associations with infantile faces. PMID:23527511

  10. The Thatcher Illusion Reveals Orientation Dependence in Brain Regions Involved in Processing Facial Expressions

    PubMed Central

    Psalta, Lilia; Young, Andrew W.; Thompson, Peter; Andrews, Timothy J.

    2015-01-01

    Although the processing of facial identity is known to be sensitive to the orientation of the face, it is less clear whether orientation sensitivity extends to the processing of facial expressions. To address this issue, we used functional MRI (fMRI) to measure the neural response to the Thatcher illusion. This illusion involves a local inversion of the eyes and mouth in a smiling face—when the face is upright, the inverted features make it appear grotesque, but when the face is inverted, the inversion is no longer apparent. Using an fMRI-adaptation paradigm, we found a release from adaptation in the superior temporal sulcus—a region directly linked to the processing of facial expressions—when the images were upright and they changed from a normal to a Thatcherized configuration. However, this release from adaptation was not evident when the faces were inverted. These results show that regions involved in processing facial expressions display a pronounced orientation sensitivity. PMID:24264941

  11. Impaired recognition of facial expressions of anger in Parkinson's disease patients acutely withdrawn from dopamine replacement therapy.

    PubMed

    Lawrence, Andrew D; Goerendt, Ines K; Brooks, David J

    2007-01-01

    We have previously reported that acute dopaminergic blockade in healthy volunteers results in a transient disruption of the recognition of facial expressions of anger, whilst leaving intact the recognition of other facial expressions (including fear and disgust) and facial identity processing. Parkinson's disease (PD) is characterised by cell loss in dopaminergic neuronal populations, and hence we predicted that PD would be associated with impaired anger recognition. We reasoned that treatment with dopamine replacement therapy (DRT) could mask any deficit present in PD, and therefore studied facial expression recognition in a group of PD patients transiently withdrawn from DRT. Seventeen PD patients were compared to 21 age- and IQ-matched controls on the Ekman 60 task, which required the forced-choice labelling of 10 exemplars of each of six facial expressions (anger, disgust, fear, sadness, happiness, surprise). In line with our predictions, PD patients showed a selective impairment in the recognition of facial expressions of anger. This deficit was not related to the PD patients' performance on the Benton unfamiliar-face matching task, which was normal, nor was the deficit related to overall disease severity, or to depression symptoms. However, as predicted by simulation theories, impaired anger recognition in PD was related to reduced levels of the anger-linked temperament trait, exploratory excitability. The results extend our previous findings of a role for dopamine in the processing of facial expressions of anger, and demonstrate the power of adopting a phylogenetic, comparative perspective on emotions. PMID:16780901

  12. Expression-invariant three-dimensional face reconstruction from a single image by facial expression generic elastic models

    NASA Astrophysics Data System (ADS)

    Moeini, Ali; Faez, Karim; Moeini, Hossein

    2014-09-01

    An efficient method is proposed for expression-invariant three-dimensional (3-D) face reconstruction from a frontal face image with a variety of facial expressions (FEs), using a facial expression generic elastic model (FE-GEM). Three generic models are employed for FE modeling within the generic elastic model (GEM) framework and are combined based on the similarity distance around the lips. FE-GEM estimates a 3-D model of a frontal face more precisely than the original GEM approach, attaining a more robust, higher-quality 3-D face reconstruction under a variety of FEs. The method was tested on an available 3-D face database, and its accuracy and robustness relative to GEM were demonstrated under a variety of FEs. FE-GEM was also tested on available two-dimensional face databases, and new synthesized poses were generated from gallery images to handle pose variations in face recognition.
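    The combination step described, blending three generic models according to similarity distance around the lips, might be sketched as below; the softmax-style weighting, depth maps, and distances are all assumptions for illustration, not the paper's actual scheme:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Three generic depth models, here stand-in 64x64 depth maps in mm
    # (synthetic data; the real models come from the GEM framework).
    models = rng.uniform(40, 60, size=(3, 64, 64))
    # Similarity distances between the input face's lip region and each
    # generic model (hypothetical values).
    lip_distance = np.array([1.8, 0.6, 2.4])

    # Smaller distance -> larger weight; a softmax over negative distances is
    # one plausible weighting, assumed here rather than taken from the paper.
    w = np.exp(-lip_distance)
    w /= w.sum()

    # Blend the three depth maps into a single reconstruction.
    combined = np.tensordot(w, models, axes=1)   # (64, 64)

    print(f"weights: {np.round(w, 3)}")
    print(f"combined depth range: {combined.min():.1f}-{combined.max():.1f} mm")
    ```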

  13. Emotional facial expressions differentially influence predictions and performance for face recognition.

    PubMed

    Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M

    2013-01-01

    This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion. PMID:22712473

  15. Memory for facial expression is influenced by the background music playing during study

    PubMed Central

    Woloszyn, Michael R.; Ewert, Laura

    2012-01-01

    The effect of the emotional quality of study-phase background music on subsequent recall for happy and sad facial expressions was investigated. Undergraduates (N = 48) viewed a series of line drawings depicting a happy or sad child in a variety of environments that were each accompanied by happy or sad music. Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were recalled as sad when sad music was previously heard, and that sad faces were recalled as happy when happy music was previously heard. Overall, the results indicated that when recalling a scene, the emotional tone is set by an integration of stimulus features from several modalities. PMID:22956988

  16. Towards emotion detection in educational scenarios from facial expressions and body movements through multimodal approaches.

    PubMed

    Saneiro, Mar; Santos, Olga C; Salmeron-Majadas, Sergio; Boticario, Jesus G

    2014-01-01

    We report current findings from an enriched multimodal emotion-detection approach that considers video recordings of facial expressions and body movements to provide affective personalized support in an educational context. In particular, we describe an annotation methodology for tagging facial expressions and body movements that correspond to changes in learners' affective states while they deal with cognitive tasks in a learning process. The ultimate goal is to combine these annotations with additional affective information collected during experimental learning sessions from different sources such as qualitative, self-reported, physiological, and behavioral data. Together, these data are used to train data-mining algorithms that automatically identify changes in learners' affective states when dealing with cognitive tasks, thereby helping to provide personalized emotional support. PMID:24892055

  18. Judging emotional congruency: Explicit attention to situational context modulates processing of facial expressions of emotion.

    PubMed

    Diéguez-Risco, Teresa; Aguado, Luis; Albert, Jacobo; Hinojosa, José Antonio

    2015-12-01

    The influence of explicit evaluative processes on the contextual integration of facial expressions of emotion was studied in a procedure that required the participants to judge the congruency of happy and angry faces with preceding sentences describing emotion-inducing situations. Judgments were faster on congruent trials in the case of happy faces and on incongruent trials in the case of angry faces. At the electrophysiological level, a congruency effect was observed in the face-sensitive N170 component that showed larger amplitudes on incongruent trials. An interactive effect of congruency and emotion appeared on the LPP (late positive potential), with larger amplitudes in response to happy faces that followed anger-inducing situations. These results show that the deliberate intention to judge the contextual congruency of facial expressions influences not only processes involved in affective evaluation such as those indexed by the LPP but also earlier processing stages that are involved in face perception.

  19. Perceptions of emotion from facial expressions are not culturally universal: evidence from a remote culture.

    PubMed

    Gendron, Maria; Roberson, Debi; van der Vyver, Jacoba Marietta; Barrett, Lisa Feldman

    2014-04-01

    It is widely believed that certain emotions are universally recognized in facial expressions. Recent evidence indicates that Western perceptions (e.g., scowls as anger) depend on cues to U.S. emotion concepts embedded in experiments. Because such cues are standard features in methods used in cross-cultural experiments, we hypothesized that evidence of universality depends on this conceptual context. In our study, participants from the United States and the Himba ethnic group from the Kunene region of northwestern Namibia sorted images of posed facial expressions into piles by emotion type. Without cues to emotion concepts, Himba participants did not show the presumed "universal" pattern, whereas U.S. participants produced a pattern with presumed universal features. With cues to emotion concepts, participants in both cultures produced sorts that were closer to the presumed "universal" pattern, although substantial cultural variation persisted. Our findings indicate that perceptions of emotion are not universal, but depend on cultural and conceptual contexts. PMID:24708506

  1. Test battery for measuring the perception and recognition of facial expressions of emotion

    PubMed Central

    Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner

    2014-01-01

    Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528
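    Internal consistency of multi-item scores such as these is commonly estimated with Cronbach's alpha; a sketch on simulated data (the respondent and task counts echo the abstract, but the scores, factor loadings, and the choice of alpha itself are illustrative assumptions, not the battery's actual analysis):

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Internal-consistency estimate; rows = respondents, columns = items."""
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
        total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
        return k / (k - 1) * (1 - item_var / total_var)

    rng = np.random.default_rng(1)
    # Simulated scores for 269 respondents on 16 tasks, driven by a shared
    # ability factor so the items correlate (all numbers are illustrative).
    ability = rng.normal(0, 1, size=(269, 1))
    scores = 0.7 * ability + 0.5 * rng.normal(0, 1, size=(269, 16))

    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```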

  2. Children's scripts for social emotions: causes and consequences are more central than are facial expressions.

    PubMed

    Widen, Sherri C; Russell, James A

    2010-09-01

    Understanding and recognition of emotions relies on emotion concepts, which are narrative structures (scripts) specifying facial expressions, causes, consequences, label, etc. organized in a temporal and causal order. Scripts and their development are revealed by examining which components better tap which concepts at which ages. This study investigated whether a facial expression or a brief story describing an emotion's cause and consequence was the stronger cue to basic-level and social emotions. Children (N = 120, 4-10 years) freely labelled the emotion implied by faces and, separately, stories for six basic-level emotions (happiness, anger, fear, surprise, disgust, and contempt) and three social emotions (embarrassment, compassion, and shame). Cause-and-consequence stories were the stronger cue overall, especially for fear, disgust, and social emotions. Faces were the stronger cue only for surprise. Younger children assimilated social emotions into basic-level emotion categories (sadness and anger); older children differentiated them. Differentiation occurred earlier for stories than for faces.

  5. Emotion recognition from facial expressions in a temporal lobe epileptic patient with ictal fear.

    PubMed

    Yamada, Makiko; Murai, Toshiya; Sato, Wataru; Namiki, Chihiro; Miyamoto, Toru; Ohigashi, Yoshitaka

    2005-01-01

    Ictal fear (IF) is an affective aura observed in patients with temporal lobe epilepsy. It has been suggested that the amygdala, a region implicated in emotion processing, is involved in generating IF. Several studies have reported that patients with IF have disturbances in emotional experience, but emotion recognition had not been tested in these patients. In this report, emotion recognition from facial expressions was investigated in a patient with IF due to temporal lobe epilepsy, who underwent hippocampectomy that completely suppressed the IF. We examined the patient before and after surgery. Before surgery, the patient tended to attribute enhanced fear, sadness, and anger to various facial expressions; after surgery, these biases disappeared. As the mechanism underlying the pre-surgical skewed perception of emotional stimuli, we suggest abnormal epileptogenic circuits involving a hypersensitive amygdala, possibly arising through a kindling mechanism.

  6. Age-Related Response Bias in the Decoding of Sad Facial Expressions

    PubMed Central

    Fölster, Mara; Hess, Ursula; Hühnel, Isabell; Werheid, Katja

    2015-01-01

    Recent studies have found that age is negatively associated with the accuracy of decoding emotional facial expressions; this effect of age was found for actors as well as for raters. Given that motivational differences and stereotypes may bias the attribution of emotion, the aim of the present study was to explore whether these age effects are due to response bias, that is, the unbalanced use of response categories. Thirty younger raters (19–30 years) and thirty older raters (65–81 years) viewed video clips of younger and older actors representing the same age ranges, and decoded their facial expressions. We computed both raw hit rates and bias-corrected hit rates to assess the influence of potential age-related response bias on decoding accuracy. Whereas raw hit rates indicated significant effects of both the actors’ and the raters’ ages on decoding accuracy for sadness, these age effects were no longer significant when response bias was corrected. Our results suggest that age effects on the accuracy of decoding facial expressions may be due, at least in part, to age-related response bias. PMID:26516920
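    The bias correction described here is commonly computed as Wagner's unbiased hit rate (Hu), which penalises raters who overuse a response category. A minimal sketch, using an invented two-emotion confusion matrix:

    ```python
    import numpy as np

    def hit_rates(confusion):
        """Raw and unbiased (Hu) hit rates from a stimulus x response confusion matrix.

        Rows are stimulus categories, columns are response categories (same
        order). The unbiased hit rate divides the squared number of correct
        responses by the product of the row total (how often the stimulus was
        shown) and the column total (how often that response was used).
        """
        confusion = np.asarray(confusion, dtype=float)
        correct = np.diag(confusion)
        row_totals = confusion.sum(axis=1)   # presentations per stimulus
        col_totals = confusion.sum(axis=0)   # uses per response category
        raw = correct / row_totals
        hu = correct**2 / (row_totals * col_totals)
        return raw, hu

    # A rater who labels almost everything "sad" gets a high raw hit rate
    # for sadness but a much lower unbiased one.
    conf = [[8, 2],   # stimulus: sad   -> responded sad 8x, happy 2x
            [6, 4]]   # stimulus: happy -> responded sad 6x, happy 4x
    raw, hu = hit_rates(conf)
    ```

    With these numbers the raw hit rate for sadness is 0.8 but the unbiased rate drops below 0.5, mirroring the kind of response-bias correction the study applied.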

  7. Facial Expressions and Ability to Recognize Emotions From Eyes or Mouth in Children.

    PubMed

    Guarnera, Maria; Hichy, Zira; Cascio, Maura I; Carrubba, Stefano

    2015-05-01

    This research aims to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust and neutral emotions from facial information. By investigating children's performance in detecting these emotions from a specific face region, we were interested to know whether children would show differences in recognizing these expressions from the upper or lower face, and if any difference between specific facial regions depended on the emotion in question. For this purpose, a group of 6-7 year-old children was selected. Participants were asked to recognize emotions by using a labeling task with three stimulus types (region of the eyes, of the mouth, and full face). The findings seem to indicate that children correctly recognize basic facial expressions when pictures represent the whole face, except for a neutral expression, which was recognized from the mouth, and sadness, which was recognized from the eyes. Children are also able to identify anger from the eyes as well as from the whole face. With respect to gender differences, there is no female advantage in emotional recognition. The results indicate a significant interaction 'gender x face region' only for anger and neutral emotions. PMID:27247651

  8. Time pressure inhibits dynamic advantage in the classification of facial expressions of emotion.

    PubMed

    Jiang, Zhongqing; Li, Wenhui; Recio, Guillermo; Liu, Ying; Luo, Wenbo; Zhang, Doufei; Sun, Dan

    2014-01-01

    Recent studies suggest an advantage in the recognition of dynamic over static facial expressions of emotion. Here, we explored the differences in the processing of static and dynamic faces under conditions of time pressure. A group of 18 participants classified static and dynamic facial expressions (angry, happy, and neutral). To increase goal-directed attention, instructions emphasized speed and announced time pressure in the response interval (maximum 600 ms). Participants responded faster and more accurately in the static than in the dynamic condition. Event-related potentials (ERPs) showed larger amplitude of the P1 (90-130 ms) and LPC (300-600 ms) components for dynamic relative to static stimuli, indicating enhanced early visual processing and emotional attention. On the other hand, the N170 was more negative for static relative to dynamic faces, suggesting better structural encoding for static faces under time pressure. The present study shows some advantages in the processing of static over dynamic facial expressions of emotion when top-down (goal-driven) attention is strengthened.

  9. Evaluation of pain in rats through facial expression following experimental tooth movement.

    PubMed

    Liao, Lina; Long, Hu; Zhang, Li; Chen, Helin; Zhou, Yang; Ye, Niansong; Lai, Wenli

    2014-04-01

    This study was carried out to evaluate pain in rats by monitoring their facial expressions following experimental tooth movement. Male Sprague-Dawley rats were divided into the following five groups based on the magnitude of orthodontic force applied and administration of analgesics: control; 20 g; 40 g; 80 g; and morphine + 40 g. Closed-coil springs were used to mimic orthodontic forces. The facial expressions of each rat were videotaped, and the resulting rat grimace scale (RGS) coding was employed for pain quantification. The RGS score increased on day 1 but showed no significant change thereafter in the control and 20-g groups. In the 40- and 80-g groups, the RGS scores increased on day 1, peaked on day 3, and started to decrease on day 5. At 14 d, the RGS scores were similar across the control and 20-, 40-, and 80-g groups but had not returned to baseline. The RGS scores in the morphine + 40-g group were significantly lower than those in the control group. Our results reveal that coding of facial expression is a valid method for evaluation of pain in rats following experimental tooth movement. Inactivated springs (no force) still cause discomfort and result in an increase in the RGS score. The threshold force magnitude required to evoke orthodontic pain in rats is between 20 and 40 g.

  10. Videos of conspecifics elicit interactive looking patterns and facial expressions in monkeys.

    PubMed

    Mosher, Clayton P; Zimmerman, Prisca E; Gothard, Katalin M

    2011-08-01

    A broader understanding of the neural basis of social behavior in primates requires the use of species-specific stimuli that elicit spontaneous, but reproducible and tractable behaviors. In this context of natural behaviors, individual variation can further inform about the factors that influence social interactions. To approximate natural social interactions similar to those documented by field studies, we used unedited video footage to induce spontaneous facial expressions and looking patterns in viewer monkeys in a laboratory setting. Three adult male monkeys (Macaca mulatta), previously behaviorally and genetically (5-HTTLPR) characterized, were monitored while they watched 10 s video segments depicting unfamiliar monkeys (movie monkeys) displaying affiliative, neutral, and aggressive behaviors. The gaze and head orientation of the movie monkeys alternated between "averted" and "directed" at the viewer. The viewers were not reinforced for watching the movies; thus their looking patterns indicated their interest and social engagement with the stimuli. The behavior of the movie monkey accounted for differences in the looking patterns and facial expressions displayed by the viewers. We also found multiple significant differences in the behavior of the viewers that correlated with their interest in these stimuli. These socially relevant dynamic stimuli elicited spontaneous social behaviors, such as eye-contact induced reciprocation of facial expression, gaze aversion, and gaze following, that were previously not observed in response to static images. This approach opens a unique opportunity to understand the mechanisms that trigger spontaneous social behaviors in humans and nonhuman primates.

  11. Recognition of facial expressions by alcoholic patients: a systematic literature review

    PubMed Central

    Donadon, Mariana Fortunata; Osório, Flávia de Lima

    2014-01-01

    Background Alcohol abuse and dependence can cause a wide variety of cognitive, psychomotor, and visual-spatial deficits. It is questionable whether this condition is associated with impairments in the recognition of affective and/or emotional information. Such impairments may promote deficits in social cognition and, consequently, in the adaptation and interaction of alcohol abusers with their social environment. The aim of this systematic review was to systematize the literature on alcoholics’ recognition of basic facial expressions in terms of the following outcome variables: accuracy, emotional intensity, and latency time. Methods A systematic literature search in the PsycINFO, PubMed, and SciELO electronic databases, with no restrictions regarding publication year, was employed as the study methodology. Results The findings of some studies indicate that alcoholics have greater impairment in facial expression recognition tasks, while others could not differentiate the clinical group from controls. However, there was a trend toward greater deficits in alcoholics. Alcoholics displayed less accuracy in recognition of sadness and disgust and required greater emotional intensity to judge facial expressions corresponding to fear and anger. Conclusion The current study was only able to identify trends in the chosen outcome variables. Future studies that aim to provide more precise evidence for the potential influence of alcohol on social cognition are needed. PMID:25228806

  12. Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.

    PubMed

    Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus

    2013-12-01

    Facial expressions convey important emotional and social information and are frequently applied in investigations of human affective processing. Dynamic faces may provide higher ecological validity to examine perceptual and cognitive processing of facial expressions. Higher order processing of emotional faces was addressed by systematically varying the task and the virtual face models. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while they viewed and evaluated either the emotion or the gender intensity of dynamic face stimuli. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding for the motion-based intensity of facial expressions. The comparison of the emotion task with the gender discrimination task revealed increased activation of the inferior parietal lobule, highlighting the involvement of parietal areas in processing high-level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.
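    The general linear model analysis mentioned above reduces, per voxel, to ordinary least squares over a design matrix. A toy sketch on simulated data (no HRF convolution or nuisance regressors, which a real fMRI GLM would include):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy design matrix: a boxcar regressor for "emotion task" blocks plus
    # an intercept column. Real analyses convolve the boxcar with a
    # haemodynamic response function; this shows only the estimation step.
    n_scans = 100
    boxcar = np.tile([1] * 10 + [0] * 10, 5)[:n_scans].astype(float)
    X = np.column_stack([boxcar, np.ones(n_scans)])

    # Simulated voxel time course: true task effect of 2.0, baseline 0.5.
    y = 2.0 * boxcar + 0.5 + 0.1 * rng.standard_normal(n_scans)

    # Ordinary least squares: beta_hat = argmin ||y - X beta||^2
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    ```

    `beta_hat[0]` recovers the task-related amplitude that contrasts such as emotion vs. gender discrimination are then computed from.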

  14. Disgust-specific impairment of facial expression recognition in Parkinson's disease.

    PubMed

    Suzuki, Atsunobu; Hoshino, Takahiro; Shigemasu, Kazuo; Kawamura, Mitsuru

    2006-03-01

    There is contradictory evidence regarding whether the impairments of the recognition of emotional facial expressions in Parkinson's disease are specific to certain emotions such as disgust and fear. Generally, neurological case reports on emotion-specific impairments have been suspected of being confounded with the factor of task difficulty. Using a refined assessment method in which the difficulty factors were controlled by means of mixed facial expressions and item response theory, we attempted to clarify whether Parkinson's disease disproportionately impaired the recognition of specific emotions. We studied 14 patients with Parkinson's disease and 39 healthy controls who were matched in terms of gender, age, years of education, and intelligence quotient. Whereas the refined method revealed that the patients with Parkinson's disease displayed significantly lower scores in disgust recognition alone, conventional methods failed to detect this impairment. In addition, control measures including face recognition abilities did not statistically explain the impairment observed in the patients. The results indicate that Parkinson's disease can indeed selectively impair the recognition of facial expressions of disgust; this provides concrete evidence for emotion-specific impairments that withstands criticisms regarding difficulty artefacts. Furthermore, the results support the proposed role of the basal ganglia-insula system in disgust recognition. This study effectively demonstrates the benefits of refining neuropsychological assessment by taking advantage of modern psychometric theory. PMID:16415306
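    The item response theory used here to control for difficulty is commonly instantiated as the two-parameter logistic (2PL) model, in which each item has a difficulty and a discrimination parameter. A sketch with invented parameter values:

    ```python
    import math

    def irt_2pl(theta, a, b):
        """Two-parameter logistic IRT model: probability that a person with
        ability theta answers correctly an item with discrimination a and
        difficulty b. Higher b means a harder item."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    # An easy item (b = -1) vs. a hard item (b = +1) for a person of
    # average ability (theta = 0); a = 1.5 is an arbitrary discrimination.
    p_easy = irt_2pl(0.0, a=1.5, b=-1.0)
    p_hard = irt_2pl(0.0, a=1.5, b=1.0)
    ```

    Fitting such a model lets accuracy comparisons be made on an ability scale that is separated from per-item difficulty, which is how the study guards against difficulty artefacts.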

  15. Categorical perception of emotional facial expressions does not require lexical categories.

    PubMed

    Sauter, Disa A; LeGuen, Oliver; Haun, Daniel B M

    2011-12-01

    Does our perception of others' emotional signals depend on the language we speak or is our perception the same regardless of language and culture? It is well established that human emotional facial expressions are perceived categorically by viewers, but whether this is driven by perceptual or linguistic mechanisms is debated. We report an investigation into the perception of emotional facial expressions, comparing German speakers to native speakers of Yucatec Maya, a language with no lexical labels that distinguish disgust from anger. In a free naming task, speakers of German, but not Yucatec Maya, made lexical distinctions between disgust and anger. However, in a delayed match-to-sample task, both groups perceived emotional facial expressions of these and other emotions categorically. The magnitude of this effect was equivalent across the language groups, as well as across emotion continua with and without lexical distinctions. Our results show that the perception of affective signals is not driven by lexical labels, instead lending support to accounts of emotions as a set of biologically evolved mechanisms.

  16. Validation of the Amsterdam Dynamic Facial Expression Set – Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions

    PubMed Central

    Wingenbach, Tanja S. H.

    2016-01-01

    Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low-intensity expressions occurring frequently. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride), each expressed at three intensities, plus a neutral condition. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author.

  17. Children with mixed language disorder do not discriminate accurately facial identity when expressions change.

    PubMed

    Robel, Laurence; Vaivre-Douret, Laurence; Neveu, Xavier; Piana, Hélène; Perier, Antoine; Falissard, Bruno; Golse, Bernard

    2008-12-01

    We investigated the recognition of pairs of faces (same or different facial identities and expressions) in two groups of 14 children aged 6-10 years, with either an expressive language disorder (ELD) or a mixed language disorder (MLD), and in two groups of 14 matched healthy controls. In terms of global performance, children with either ELD or MLD differed little from controls in either face or emotion recognition. In contrast, we found that children with MLD, but not those with ELD, took identical faces to be different when their expressions changed. Since children with mixed language disorders are socially more impaired than children with ELD, these features may partly underpin their social difficulties.

  18. Coupled Dictionary Learning for the Detail-Enhanced Synthesis of 3-D Facial Expressions.

    PubMed

    Liang, Haoran; Liang, Ronghua; Song, Mingli; He, Xiaofei

    2016-04-01

    The desire to reconstruct 3-D face models with expressions from 2-D face images fosters increasing interest in addressing the problem of face modeling. This task is important and challenging in the field of computer animation. Facial contours and wrinkles are essential to generate a face with a certain expression; however, these details are generally ignored or are not seriously considered in previous studies on face model reconstruction. Thus, we employ coupled radial basis function networks to derive an intermediate 3-D face model from a single 2-D face image. To optimize the 3-D face model further through landmarks, a coupled dictionary that relates 3-D face models and their corresponding 3-D landmarks is learned from the given training set through local coordinate coding. Another coupled dictionary is then constructed to bridge the 2-D and 3-D landmarks for the transfer of vertices on the face model. As a result, the final 3-D face can be generated with the appropriate expression. In the testing phase, the 2-D input faces are converted into 3-D models that display different expressions. Experimental results indicate that the proposed approach to facial expression synthesis can obtain model details more effectively than previous methods can.
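    The coupled-dictionary idea, shared coding coefficients linking 2-D and 3-D landmark representations, can be sketched in a few lines. This toy version uses least-squares coding over synthetic data rather than the paper's sparse local coordinate coding; all sizes and variable names are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy training set: paired 2-D landmark vectors and 3-D landmark vectors
    # assumed to share the same coding coefficients over a coupled dictionary.
    n, d2, d3, k = 40, 10, 15, 8
    codes = rng.standard_normal((n, k))
    D2 = rng.standard_normal((k, d2))   # 2-D part of the coupled dictionary
    D3 = rng.standard_normal((k, d3))   # 3-D part of the coupled dictionary
    X2 = codes @ D2                      # observed 2-D landmarks
    X3 = codes @ D3                      # corresponding 3-D landmarks

    # Test time: observe only 2-D landmarks, recover the coding coefficients
    # against the 2-D sub-dictionary by least squares, then synthesize the
    # 3-D landmarks with the coupled 3-D sub-dictionary. For illustration we
    # reuse a training sample's code as the "new" observation.
    x2_new = codes[0] @ D2
    c_hat, *_ = np.linalg.lstsq(D2.T, x2_new, rcond=None)
    x3_pred = c_hat @ D3
    ```

    Because the two sub-dictionaries share coefficients, coding in one domain is enough to synthesize the other, which is the mechanism that bridges 2-D landmarks and 3-D vertices in this kind of approach.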

  19. Diminished facial emotion expression and associated clinical characteristics in Anorexia Nervosa.

    PubMed

    Lang, Katie; Larsson, Emma E C; Mavromara, Liza; Simic, Mima; Treasure, Janet; Tchanturia, Kate

    2016-02-28

    This study aimed to investigate emotion expression in a large group of children, adolescents and adults with Anorexia Nervosa (AN), and to investigate the associated clinical correlates. One hundred and forty-one participants (AN=66, HC=75) were recruited, and positive and negative film clips were used to elicit emotion expressions. The Facial Activation Coding system (FACES) was used to code emotion expression. Subjective ratings of emotion were collected. Individuals with AN displayed less positive emotions during the positive film clip compared to healthy controls (HC). There was no significant difference between the groups on the Positive and Negative Affect Scale (PANAS). The AN group displayed emotional incongruence (reporting a different emotion from what would be expected given the stimuli, with limited facial affect to signal the emotion experienced), whereby they reported feeling significantly higher rates of negative emotion during the positive clip. There were no differences in emotion expression between the groups during the negative film clip. Despite this, individuals with AN reported feeling significantly higher levels of negative emotions during the negative clip. Diminished positive emotion expression was associated with more severe clinical symptoms, which could suggest that these individuals represent a group with serious social difficulties, which may require specific attention in treatment.

  20. The Odor Context Facilitates the Perception of Low-Intensity Facial Expressions of Emotion.

    PubMed

    Leleu, Arnaud; Demily, Caroline; Franck, Nicolas; Durand, Karine; Schaal, Benoist; Baudouin, Jean-Yves

    2015-01-01

    It has been established that the recognition of facial expressions integrates contextual information. In this study, we aimed to clarify the influence of contextual odors. The participants were asked to match a target face varying in expression intensity with non-ambiguous expressive faces. Intensity variations in the target faces were designed by morphing expressive faces with neutral faces. In addition, the influence of verbal information was assessed by providing half the participants with the emotion names. Odor cues were manipulated by placing participants in a pleasant (strawberry), aversive (butyric acid), or no-odor control context. The results showed two main effects of the odor context. First, the minimum amount of visual information required to perceive an expression was lowered when the odor context was emotionally congruent: happiness was correctly perceived at lower intensities in the faces displayed in the pleasant odor context, and the same phenomenon occurred for disgust and anger in the aversive odor context. Second, the odor context influenced the false perception of expressions that were not used in target faces, with distinct patterns according to the presence of emotion names. When emotion names were provided, the aversive odor context decreased intrusions for disgust ambiguous faces but increased them for anger. When the emotion names were not provided, this effect did not occur and the pleasant odor context elicited an overall increase in intrusions for negative expressions. We conclude that olfaction plays a role in the way facial expressions are perceived in interaction with other contextual influences such as verbal information. PMID:26390036
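    Morphing expressive with neutral faces to control intensity can be approximated, in its simplest appearance-only form, by a linear pixel blend (a true morph also warps landmark geometry). A sketch on an invented 2x2 grayscale pair:

    ```python
    import numpy as np

    def blend_intensity(neutral, expressive, alpha):
        """Linear blend between a neutral and an expressive face image.

        alpha = 0 gives the neutral face, alpha = 1 the full expression;
        intermediate values give lower-intensity expressions. This pixel
        cross-fade is only the appearance half of a real morph, which also
        warps the landmark geometry between the two faces.
        """
        neutral = np.asarray(neutral, dtype=float)
        expressive = np.asarray(expressive, dtype=float)
        return (1.0 - alpha) * neutral + alpha * expressive

    # 30% intensity version of a tiny fake face pair:
    neutral = np.array([[100.0, 100.0], [100.0, 100.0]])
    happy = np.array([[200.0, 100.0], [100.0, 50.0]])
    low_intensity = blend_intensity(neutral, happy, 0.3)
    ```

    Sweeping `alpha` from 0 to 1 yields the graded intensity continuum that studies like this one use to find the minimum visual information needed to perceive an expression.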

  2. Suboptimal Exposure to Facial Expressions When Viewing Video Messages From a Small Screen: Effects on Emotion, Attention, and Memory

    ERIC Educational Resources Information Center

    Ravaja, Niklas; Kallinen, Kari; Saari, Timo; Keltikangas-Jarvinen, Liisa

    2004-01-01

    The authors examined the effects of suboptimally presented facial expressions on emotional and attentional responses and memory among 39 young adults viewing video (business news) messages from a small screen. Facial electromyography (EMG) and respiratory sinus arrhythmia were used as physiological measures of emotion and attention, respectively.…

  3. Impaired Facial Expression Recognition in Children with Temporal Lobe Epilepsy: Impact of Early Seizure Onset on Fear Recognition

    ERIC Educational Resources Information Center

    Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle

    2008-01-01

    The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…

  4. Why Do Fearful Facial Expressions Elicit Behavioral Approach? Evidence From a Combined Approach-Avoidance Implicit Association Test

    PubMed Central

    Hammer, Jennifer L.; Marsh, Abigail A.

    2015-01-01

    Despite communicating a “negative” emotion, fearful facial expressions predominantly elicit behavioral approach from perceivers. It has been hypothesized that this seemingly paradoxical effect may occur due to fearful expressions’ resemblance to vulnerable, infantile faces. However, this hypothesis has not yet been tested. We used a combined approach-avoidance/implicit association test (IAT) to test this hypothesis. Participants completed an approach-avoidance lever task during which they responded to fearful and angry facial expressions as well as neutral infant and adult faces presented in an IAT format. Results demonstrated an implicit association between fearful facial expressions and infant faces and showed that both fearful expressions and infant faces primarily elicit behavioral approach. The dominance of approach responses to both fearful expressions and infant faces decreased as a function of psychopathic personality traits. Results suggest that the prosocial responses to fearful expressions observed in most individuals may stem from their associations with infantile faces. PMID:25603135

  5. Responses in the right posterior superior temporal sulcus show a feature-based response to facial expression.

    PubMed

    Flack, Tessa R; Andrews, Timothy J; Hymers, Mark; Al-Mosaiwi, Mohammed; Marsden, Samuel P; Strachan, James W A; Trakulpipat, Chayanit; Wang, Liang; Wu, Tian; Young, Andrew W

    2015-08-01

    The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently.
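    Aligned and misaligned composite stimuli of the kind used here can be constructed by stacking face halves and offsetting one of them. A minimal sketch; the offset size is an illustrative choice, not from the study:

    ```python
    import numpy as np

    def make_composite(top_face, bottom_face, misaligned=False, offset=4):
        """Build a composite face from the top half of one image and the
        bottom half of another (both HxW grayscale arrays).

        In the aligned condition the halves sit flush, which encourages
        holistic fusion; shifting the bottom half sideways (misaligned) is
        the standard manipulation used to disrupt holistic processing.
        """
        h, w = top_face.shape
        top = top_face[: h // 2]
        bottom = bottom_face[h // 2 :]
        if misaligned:
            bottom = np.roll(bottom, offset, axis=1)  # crude horizontal shift
        return np.vstack([top, bottom])

    # Tiny stand-in images: face_a posing one expression, face_b another.
    face_a = np.full((8, 8), 1.0)
    face_b = np.full((8, 8), 2.0)
    aligned = make_composite(face_a, face_b)
    misaligned = make_composite(face_a, face_b, misaligned=True)
    ```

    Comparing judgments of the top half across the aligned and misaligned versions is what yields the composite effect measured in Experiment 1.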

  6. Facial Expression Aftereffect Revealed by Adaptation to Emotion-Invisible Dynamic Bubbled Faces

    PubMed Central

    Luo, Chengwen; Wang, Qingyun; Schyns, Philippe G.; Kingdom, Frederick A. A.; Xu, Hong

    2015-01-01

    Visual adaptation is a powerful tool to probe the short-term plasticity of the visual system. Adapting to local features such as oriented lines can distort our judgment of subsequently presented lines, the tilt aftereffect. The tilt aftereffect is believed to be processed at low levels of the visual cortex, such as V1. Adaptation to faces, on the other hand, can produce significant aftereffects in high-level traits such as identity, expression, and ethnicity. However, whether face adaptation necessitates awareness of face features is debatable. In the current study, we investigated whether facial expression aftereffects (FEAE) can be generated by partially visible faces. We first generated partially visible faces using the bubbles technique, in which the face was seen through randomly positioned circular apertures, and selected the bubbled faces for which the subjects were unable to identify happy or sad expressions. When the subjects adapted to static displays of these partial faces, no significant FEAE was found. However, when the subjects adapted to a dynamic video display of a series of different partial faces, a significant FEAE was observed. In both conditions, subjects could not identify facial expression in the individual adapting faces. These results suggest that our visual system is able to integrate unrecognizable partial faces over a short period of time and that the integrated percept affects our judgment of subsequently presented faces. We conclude that the FEAE can be generated by partial faces with few facial expression cues, implying that our cognitive system fills in the missing parts during adaptation, or that subcortical structures are activated by the bubbled faces without conscious recognition of emotion during adaptation. PMID:26717572
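    The bubbles technique reveals a face only through randomly placed apertures. A simplified sketch with hard-edged circular apertures instead of the Gaussian bubbles of the original method; sizes and counts are illustrative:

    ```python
    import numpy as np

    def bubble_mask(shape, n_bubbles, radius, rng):
        """Binary mask of randomly positioned circular apertures, in the
        spirit of the bubbles technique. The original method uses smooth
        Gaussian apertures; hard-edged circles keep the sketch short."""
        h, w = shape
        yy, xx = np.mgrid[0:h, 0:w]
        mask = np.zeros(shape, dtype=bool)
        for _ in range(n_bubbles):
            cy, cx = rng.integers(0, h), rng.integers(0, w)
            mask |= (yy - cy) ** 2 + (xx - cx) ** 2 <= radius**2
        return mask

    rng = np.random.default_rng(0)
    face = np.ones((64, 64))             # stand-in for a face image
    mask = bubble_mask(face.shape, n_bubbles=5, radius=6, rng=rng)
    bubbled = np.where(mask, face, 0.0)  # face visible only through apertures
    ```

    Generating many such masks and keeping only those for which observers cannot name the expression reproduces the "emotion-invisible" adapting stimuli described above.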

  7. Geometric Feature-Based Facial Expression Recognition in Image Sequences Using Multi-Class AdaBoost and Support Vector Machines

    PubMed Central

    Ghimire, Deepak; Lee, Joonwhoan

    2013-01-01

    Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames using displacement estimation based on elastic bunch graph matching. Feature vectors are extracted from the tracking results of individual landmarks, as well as pairs of landmarks, and normalized with respect to the first frame in the sequence. The prototypical expression sequence for each class of facial expression is formed by taking the median of the landmark tracking results from the training facial expression sequences. Multi-class AdaBoost, with the dynamic time warping similarity distance between the feature vector of the input facial expression and the prototypical facial expression, is used as a weak classifier to select the subset of discriminative feature vectors. Finally, two methods for facial expression recognition are presented: using multi-class AdaBoost with dynamic time warping, or using a support vector machine on the boosted feature vectors. The results on the Cohn-Kanade (CK+) facial expression database show a recognition accuracy of 95.17% and 97.35% using multi-class AdaBoost and support vector machines, respectively. PMID:23771158
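    The dynamic time warping (DTW) similarity distance at the heart of the weak classifiers can be sketched as the standard DTW recurrence over two landmark-feature sequences. This is a generic textbook implementation, not the authors' code:

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping distance between two sequences of feature
        vectors (shape [time, features]), with Euclidean local cost."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                # Allow match, insertion, or deletion at each step.
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]
    ```

    An input sequence would then be compared against each class's median prototype sequence, with the smallest DTW distance indicating the best-matching expression class.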

  8. Facing mixed emotions: Analytic and holistic perception of facial emotion expressions engages separate brain networks.

    PubMed

    Meaux, Emilie; Vuilleumier, Patrik

    2016-11-01

    The ability to decode facial emotions is of primary importance for human social interactions; yet, it is still debated how we analyze faces to determine their expression. Here we compared the processing of emotional face expressions through holistic integration and/or local analysis of visual features, and determined which brain systems mediate these distinct processes. Behavioral, physiological, and brain responses to happy and angry faces were assessed by presenting congruent global configurations of expressions (e.g., happy top+happy bottom), incongruent composite configurations (e.g., angry top+happy bottom), and isolated features (e.g., happy top only). Top and bottom parts were always from the same individual. Twenty-six healthy volunteers were scanned using fMRI while they classified the expression in either the top or the bottom face part but ignored information in the other non-target part. Results indicate that the recognition of happy and anger expressions is neither strictly holistic nor analytic. Both routes were involved, but with a different role for analytic and holistic information depending on the emotion type, and different weights of local features between happy and anger expressions. Dissociable neural pathways were engaged depending on emotional face configurations. In particular, regions within the face processing network differed in their sensitivity to holistic expression information, which predominantly activated fusiform, inferior occipital areas and amygdala when internal features were congruent (i.e. template matching), whereas more local analysis of independent features preferentially engaged STS and prefrontal areas (IFG/OFC) in the context of full face configurations, but early visual areas and pulvinar when seen in isolated parts. Collectively, these findings suggest that facial emotion recognition recruits separate, but interactive dorsal and ventral routes within the face processing networks, whose engagement may be shaped by

  10. Inducing a Concurrent Motor Load Reduces Categorization Precision for Facial Expressions

    PubMed Central

    2015-01-01

    Motor theories of expression perception posit that observers simulate facial expressions within their own motor system, aiding perception and interpretation. Consistent with this view, reports have suggested that blocking facial mimicry induces expression labeling errors and alters patterns of ratings. Crucially, however, it is unclear whether changes in labeling and rating behavior reflect genuine perceptual phenomena (e.g., greater internal noise associated with expression perception or interpretation) or are products of response bias. In an effort to advance this literature, the present study introduces a new psychophysical paradigm for investigating motor contributions to expression perception that overcomes some of the limitations inherent in simple labeling and rating tasks. Observers were asked to judge whether smiles drawn from a morph continuum were sincere or insincere, in the presence or absence of a motor load induced by the concurrent production of vowel sounds. Having confirmed that smile sincerity judgments depend on cues from both eye and mouth regions (Experiment 1), we demonstrated that vowel production reduces the precision with which smiles are categorized (Experiment 2). In Experiment 3, we replicated this effect when observers were required to produce vowels, but not when they passively listened to the same vowel sounds. In Experiments 4 and 5, we found that gender categorizations, equated for difficulty, were unaffected by vowel production, irrespective of the presence of a smiling expression. These findings greatly advance our understanding of motor contributions to expression perception and represent a timely contribution in light of recent high-profile challenges to the existing evidence base. PMID:26618622
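    Categorization precision in this kind of morph-continuum task is commonly quantified as the spread (inverse slope) of a fitted psychometric function: the shallower the function, the less precise the categorization. The abstract does not give the fitting procedure, so the following is a generic sketch on made-up response data, using a simple grid search over a logistic function:

    ```python
    import numpy as np

    def logistic(x, mu, sigma):
        # mu = category boundary (PSE); sigma = spread (smaller = more precise)
        return 1.0 / (1.0 + np.exp(-(x - mu) / sigma))

    # Hypothetical data: morph level (0 = insincere .. 1 = sincere)
    # vs. proportion of "sincere" responses at each level.
    levels = np.linspace(0, 1, 7)
    p_sincere = np.array([0.05, 0.10, 0.30, 0.55, 0.75, 0.90, 0.97])

    # Coarse grid search for the best-fitting boundary and spread.
    mus = np.linspace(0.0, 1.0, 101)
    sigmas = np.linspace(0.02, 0.5, 49)
    sse = np.array([[np.sum((logistic(levels, m, s) - p_sincere) ** 2)
                     for s in sigmas] for m in mus])
    i, j = np.unravel_index(sse.argmin(), sse.shape)
    mu, sigma = mus[i], sigmas[j]
    ```

    A motor-load effect of the kind reported would show up as a larger fitted sigma in the vowel-production condition than in the baseline condition.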

  11. Facial expression of fear in the context of human ethology: Recognition advantage in the perception of male faces.

    PubMed

    Trnka, Radek; Tavel, Peter; Hasto, Jozef

    2015-01-01

    Facial expression is one of the core issues in the ethological approach to the study of human behaviour. This study discusses sex-specific aspects of the recognition of the facial expression of fear using results from our previously published experimental study. We conducted an experiment in which 201 participants judged seven different facial expressions: anger, contempt, disgust, fear, happiness, sadness and surprise (Trnka et al. 2007). Participants were able to recognize the facial expression of fear significantly better on a male face than on a female face. Females also recognized fear generally better than males. The present study provides a new interpretation of this sex difference in the recognition of fear. We interpret these results within the paradigm of human ethology, taking into account the adaptive function of the facial expression of fear. We argue that better detection of fear might be crucial for females under a situation of serious danger in groups of early hominids. The crucial role of females in nurturing and protecting offspring was fundamental for the reproductive potential of the group. A clear decoding of this alarm signal might thus have enabled the timely preparation of females for escape or defence to protect their health for successful reproduction. Further, it is likely that males played the role of guardians of social groups and that they were responsible for effective warnings of the group under situations of serious danger. This may explain why the facial expression of fear is better recognizable on the male face than on the female face.

  12. The visual discrimination of negative facial expressions by younger and older adults.

    PubMed

    Mienaltowski, Andrew; Johnson, Ellen R; Wittman, Rebecca; Wilson, Anne-Taylor; Sturycz, Cassandra; Norman, J Farley

    2013-04-01

    Previous research has demonstrated that older adults are not as accurate as younger adults at perceiving negative emotions in facial expressions. These studies rely on emotion recognition tasks that involve choosing between many alternatives, creating the possibility that age differences emerge for cognitive rather than perceptual reasons. In the present study, an emotion discrimination task was used to investigate younger and older adults' ability to visually discriminate between negative emotional facial expressions (anger, sadness, fear, and disgust) at low (40%) and high (80%) expressive intensity. Participants completed trials blocked by pairs of emotions. Discrimination ability was quantified from the participants' responses using signal detection measures. In general, the results indicated that older adults had more difficulty discriminating between low intensity expressions of negative emotions than did younger adults. However, younger and older adults did not differ when discriminating between anger and sadness. These findings demonstrate that age differences in visual emotion discrimination emerge when signal detection measures are used but that these differences are not uniform and occur only in specific contexts.
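    The abstract does not state which signal detection measures were used; a standard choice for quantifying discrimination ability is d', computed from hit and false-alarm rates. A minimal sketch follows (the log-linear correction for rates of 0 or 1 is one common convention, assumed here):

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate, false_alarm_rate, n_trials):
        """d' from hit and false-alarm rates, with a log-linear correction
        so that rates of exactly 0 or 1 stay finite."""
        adjust = lambda r: (r * n_trials + 0.5) / (n_trials + 1)
        z = NormalDist().inv_cdf
        return z(adjust(hit_rate)) - z(adjust(false_alarm_rate))

    # Hypothetical example: 80% hits, 20% false alarms over 40 trials.
    d = d_prime(0.80, 0.20, 40)
    ```

    Lower d' for older adults at 40% expressive intensity would correspond to the age difference in low-intensity discrimination reported above.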

  13. ANS responses and facial expressions differentiate between the taste of commercial breakfast drinks.

    PubMed

    de Wijk, René A; He, Wei; Mensink, Manon G J; Verhoeven, Rob H G; de Graaf, Cees

    2014-01-01

    The high failure rate of new market introductions, despite initial successful testing with traditional sensory and consumer tests, necessitates the development of other tests. This study explored the ability of selected physiological and behavioral measures of the autonomic nervous system (ANS) to distinguish between repeated exposures to foods from a single category (breakfast drinks) and with similar liking ratings. In this within-subject study 19 healthy young adults sipped from five breakfast drinks, each presented five times, while ANS responses (heart rate, skin conductance response and skin temperature), facial expressions, liking, and intensities were recorded. The results showed that liking was associated with increased heart rate and skin temperature, and more neutral facial expressions. Intensity was associated with reduced heart rate and skin temperature, more neutral expressions and more negative expressions of sadness, anger and surprise. Strongest associations with liking were found after 1 second of tasting, whereas strongest associations with intensity were found after 2 seconds of tasting. Future studies should verify the contribution of the additional information to the prediction of market success. PMID:24714107

  14. Facial expressions as conditioned stimuli for electrodermal responses: a case of "preparedness"?

    PubMed

    Ohman, A; Dimberg, U

    1978-11-01

    Converging data suggest that human facial behavior has an evolutionary basis. Combining these data with Seligman's preparedness theory, it was predicted that facial expressions of anger should be more readily associated with aversive events than should expressions of happiness. Two experiments involving differential electrodermal conditioning to pictures of faces, with electric shock as the unconditioned stimulus, were performed. In the first experiment, the subjects were exposed to two pictures of the same person, one with an angry and one with a happy expression. For half of the subjects, the shock followed the angry face, and for the other half, it followed the happy face. In the second experiment, three groups of subjects differentiated between pictures of male and female faces, both showing angry, neutral, and happy expressions. Responses to angry conditioned stimuli showed significant resistance to extinction in both experiments, with a larger effect in Experiment 2. Responses to happy or neutral conditioned stimuli, on the other hand, extinguished immediately when the shock was withheld. The results are related to conditioning to phobic stimuli and to the preparedness theory.

  16. The Effect of Secure Attachment State and Infant Facial Expressions on Childless Adults’ Parental Motivation

    PubMed Central

    Ding, Fangyuan; Zhang, Dajun; Cheng, Gang

    2016-01-01

    This study examined the association between infant facial expressions and parental motivation, as well as the interaction between attachment state and expressions. Two hundred eighteen childless adults (mean age = 19.22 years; 118 males, 100 females) were recruited. Participants completed the Chinese version of the State Adult Attachment Measure and the E-prime test, which comprised three components: (a) liking, the specific hedonic experience in reaction to laughing, neutral, and crying infant faces; (b) representational responding, actively seeking infant faces with specific expressions; and (c) evoked responding, actively retaining images of three different infant facial expressions. While the first component refers to the "liking" of infants, the second and third components entail the "wanting" of an infant. Random intercepts multilevel models with emotion nested within participants revealed a significant interaction between secure attachment state and emotion on both liking and representational responding. A hierarchical regression analysis was conducted to examine the unique contribution of secure attachment state. Findings demonstrated that, after controlling for sex and for anxious and avoidant attachment, secure attachment state positively predicted parental motivations (liking and wanting) in the neutral and crying conditions, but not the laughing condition. These findings demonstrate the significant role of secure attachment state in parental motivation, specifically when infants display uncertain and negative emotions. PMID:27582724
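    The hierarchical regression step described above (testing the unique contribution of secure attachment after controlling for sex, anxious, and avoidant attachment) amounts to comparing R² between nested ordinary-least-squares models. A sketch on simulated stand-in data, where every variable name and effect size is invented for illustration:

    ```python
    import numpy as np

    def r_squared(X, y):
        """R^2 of an OLS fit (X must include an intercept column)."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

    rng = np.random.default_rng(1)
    n = 218  # sample size matching the study
    sex = rng.integers(0, 2, n).astype(float)
    anxious, avoidant, secure = rng.normal(size=(3, n))
    # Simulated outcome with a genuine (invented) effect of secure attachment.
    liking = 0.4 * secure + 0.2 * anxious + rng.normal(size=n)

    ones = np.ones(n)
    step1 = np.column_stack([ones, sex, anxious, avoidant])   # control block
    step2 = np.column_stack([step1, secure])                  # add predictor
    delta_r2 = r_squared(step2, liking) - r_squared(step1, liking)
    ```

    A positive delta_r2 is the "unique contribution" being tested; in practice its significance would be assessed with an incremental F test.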

  18. How Do Typically Developing Deaf Children and Deaf Children with Autism Spectrum Disorder Use the Face When Comprehending Emotional Facial Expressions in British Sign Language?

    ERIC Educational Resources Information Center

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-01-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…

  20. Personality influences the neural responses to viewing facial expressions of emotion.

    PubMed

    Calder, Andrew J; Ewbank, Michael; Passamonti, Luca

    2011-06-12

    Cognitive research has long been aware of the relationship between individual differences in personality and performance on behavioural tasks. However, within the field of cognitive neuroscience, the way in which such differences manifest at a neural level has received relatively little attention. We review recent research addressing the relationship between personality traits and the neural response to viewing facial signals of emotion. In one section, we discuss work demonstrating the relationship between anxiety and the amygdala response to facial signals of threat. A second section considers research showing that individual differences in reward drive (behavioural activation system), a trait linked to aggression, influence the neural responsivity and connectivity between brain regions implicated in aggression when viewing facial signals of anger. Finally, we address recent criticisms of the correlational approach to fMRI analyses and conclude that when used appropriately, analyses examining the relationship between personality and brain activity provide a useful tool for understanding the neural basis of facial expression processing and emotion processing in general. PMID:21536554

  1. Affective engagement for facial expressions and emotional scenes: The influence of social anxiety

    PubMed Central

    Wangelin, Bethany C.; Bradley, Margaret M.; Kastner, Anna; Lang, Peter J.

    2012-01-01

    Pictures of emotional facial expressions or natural scenes are often used as cues in emotion research. We examined the extent to which these different stimuli engage emotion and attention, and whether the presence of social anxiety symptoms influences responding to facial cues. Sixty participants reporting high or low social anxiety viewed pictures of angry, neutral, and happy faces, as well as violent, neutral, and erotic scenes, while skin conductance and event-related potentials were recorded. Acoustic startle probes were presented throughout picture viewing, and blink magnitude, probe P3 and reaction time to the startle probe also were measured. Results indicated that viewing emotional scenes prompted strong reactions in autonomic, central, and reflex measures, whereas pictures of faces were generally weak elicitors of measurable emotional response. However, higher social anxiety was associated with modest electrodermal changes when viewing angry faces and mild startle potentiation when viewing either angry or smiling faces, compared to neutral. Taken together, pictures of facial expressions do not strongly engage fundamental affective reactions, but these cues appeared to be effective in distinguishing between high and low social anxiety participants, supporting their use in anxiety research. PMID:22643041

  2. Working memory and the identification of facial expression in patients with left frontal glioma.

    PubMed

    Mu, Yong-Gao; Huang, Ling-Juan; Li, Shi-Yun; Ke, Chao; Chen, Yu; Jin, Yu; Chen, Zhong-Ping

    2012-09-01

    Patients with brain tumors may have cognitive dysfunctions, including deterioration of working memory, that affect quality of life. This study explored the presence of defects in working memory and in the identification of facial expressions in patients with left frontal glioma. This case-control study recruited 11 matched pairs of patients and healthy control subjects (mean age ± standard deviation, 37.00 ± 10.96 years vs 36.73 ± 11.20 years; 7 male and 4 female) from March through December 2011. The psychological battery contained tests that estimate verbal/visual-spatial working memory, executive function, and the identification of facial expressions. According to the paired-samples analysis, there were no differences in the anxiety and depression scores or in the intelligence quotients between the 2 groups (P > .05). All indices of the Digit Span Test were significantly worse in patients than in control subjects (P < .05), but the Tapping Test scores did not differ between patient and control groups. Of all 7 Wisconsin Card Sorting Test (WCST) indexes, only the Perseverative Response was significantly different between patients and control subjects (P < .05). Patients were significantly less accurate in detecting angry facial expressions than were control subjects (30.3% vs 57.6%; P < .05) but showed no deficits in the identification of other expressions. The backward indexes of the Digit Span Test were associated with emotion scores and with tumor size and grade (P < .05). Patients with left frontal glioma had deficits in verbal working memory and in the ability to identify anger. These may have resulted from damage to functional frontal cortex regions whose roles in these 2 capabilities have not been confirmed. However, verbal working memory performance might be affected by emotional and tumor-related factors. PMID:23095835

  3. Perceiving emotions: Cueing social categorization processes and attentional control through facial expressions.

    PubMed

    Cañadas, Elena; Lupiáñez, Juan; Kawakami, Kerry; Niedenthal, Paula M; Rodríguez-Bailón, Rosa

    2016-09-01

    Individuals spontaneously categorise other people on the basis of their gender, ethnicity and age. But what about the emotions they express? In two studies we tested the hypothesis that facial expressions are similar to other social categories in that they can function as contextual cues to control attention. In Experiment 1 we associated expressions of anger and happiness with specific proportions of congruent/incongruent flanker trials. We also created consistent and inconsistent category members within each of these two general contexts. The results demonstrated that participants exhibited a larger congruency effect when presented with faces in the emotional group associated with a high proportion of congruent trials. Notably, this effect transferred to inconsistent members of the group. In Experiment 2 we replicated the effects with faces depicting true and false smiles. Together these findings provide consistent evidence that individuals spontaneously utilise emotions to categorise others and that such categories determine the allocation of attentional control.

  5. Facial expressions of emotion (KDEF): identification under different display-duration conditions.

    PubMed

    Calvo, Manuel G; Lundqvist, Daniel

    2008-02-01

    Participants judged which of seven facial expressions (neutrality, happiness, anger, sadness, surprise, fear, and disgust) were displayed by a set of 280 faces corresponding to 20 female and 20 male models of the Karolinska Directed Emotional Faces database (Lundqvist, Flykt, & Ohman, 1998). Each face was presented under free-viewing conditions (to 63 participants) and also for 25, 50, 100, 250, and 500 msec (to 160 participants), to examine identification thresholds. Measures of identification accuracy, types of errors, and reaction times were obtained for each expression. In general, happy faces were identified more accurately, earlier, and faster than other faces, whereas judgments of fearful faces were the least accurate, the latest, and the slowest. Norms for each face and expression regarding level of identification accuracy, errors, and reaction times may be downloaded from www.psychonomic.org/archive/.

  6. Sensitivity to posed and genuine facial expressions of emotion in severe depression.

    PubMed

    Douglas, Katie M; Porter, Richard J; Johnston, Lucy

    2012-03-30

    The aim of the current study was to investigate whether the ability to distinguish genuine from non-genuine (neutral or posed) facial expressions of emotion (happiness, sadness, fear and disgust) is impaired in depression, and whether improvement in this ability occurs with treatment response. Sixty-eight depressed inpatients and 50 matched healthy controls performed the Emotion Categorisation Task three times over 6 weeks. All participants showed some sensitivity to the meaningful differences between genuine and non-genuine expressions of emotion, with an increasing percentage of faces labelled as genuinely feeling the emotion from neutral to posed to genuine presentations. Depressed patients showed significantly less sensitivity in differentiating non-genuine from genuine expressions of sadness, compared with healthy controls. Performance on the Emotion Categorisation Task did not change over time in treatment responders compared with treatment non-responders. These findings have implications for understanding why depressed individuals may have difficulties in social interactions.

  7. Body cues, not facial expressions, discriminate between intense positive and negative emotions.

    PubMed

    Aviezer, Hillel; Trope, Yaacov; Todorov, Alexander

    2012-11-30

    The distinction between positive and negative emotions is fundamental in emotion models. Intriguingly, neurobiological work suggests shared mechanisms across positive and negative emotions. We tested whether similar overlap occurs in real-life facial expressions. During peak intensities of emotion, positive and negative situations were successfully discriminated from isolated bodies but not faces. Nevertheless, viewers perceived illusory positivity or negativity in the nondiagnostic faces when seen with bodies. To reveal the underlying mechanisms, we created compounds of intense negative faces combined with positive bodies, and vice versa. Perceived affect and mimicry of the faces shifted systematically as a function of their contextual body emotion. These findings challenge standard models of emotion expression and highlight the role of the body in expressing and perceiving emotions. PMID:23197536

  8. Subliminal and supraliminal processing of facial expression of emotions: brain oscillation in the left/right frontal area.

    PubMed

    Balconi, Michela; Ferrari, Chiara

    2012-03-26

    The unconscious effects of an emotional stimulus have been highlighted by a vast amount of research, yet it remains an open question whether a specific function can be assigned to cortical brain oscillations in the unconscious perception of facial expressions of emotion. Alpha-band variation was monitored over the right and left cortical hemispheres while subjects consciously (supraliminal stimulation) or unconsciously (subliminal stimulation) processed facial patterns. Twenty subjects looked at facial expressions of six emotions (anger, fear, surprise, disgust, happiness, and sadness) plus a neutral face under two conditions: supraliminal (200 ms) vs. subliminal (30 ms) stimulation (140 target-mask pairs per condition). The results showed that conscious/unconscious processing and the significance of the stimulus can modulate alpha power. Moreover, there was increased right-frontal activity for negative emotions vs. increased left-frontal activity for positive emotions. The emotional significance of the facial expressions was invoked to explain these different cortical responses.

  9. Using the rear projection of the Socibot Desktop robot for creation of applications with facial expressions

    NASA Astrophysics Data System (ADS)

    Gîlcă, G.; Bîzdoacă, N. G.; Diaconu, I.

    2016-08-01

    This article aims to implement several practical applications using the Socibot Desktop social robot. We realize three applications: creating a speech sequence using the Kiosk menu of the browser interface, creating a program in the Virtual Robot browser interface, and making a new guise to be loaded into the robot's memory and projected onto its face. The first application is created in the Compose submenu, which contains five file categories (audio, eyes, face, head, and mood) that support creation of the projected sequence. The second application is more complex; the completed program contains audio files, speeches (which can be created in over 20 languages), head movements, the robot's facial parameters as a function of the action units (AUs) of the facial muscles, its expressions, and its line of sight. The last application changes the robot's appearance using a guise we created in Adobe Photoshop and then loaded into the robot's memory.

  10. Facial expression reconstruction on the basis of selected vertices of triangle mesh

    NASA Astrophysics Data System (ADS)

    Peszor, Damian; Wojciechowska, Marzena

    2016-06-01

    Facial expression reconstruction is an important issue in the field of computer graphics. While it is relatively easy to create an animation based on meshes constructed from video recordings, such high-quality data are often not transferred to another model because of the lack of an intermediary, anthropometry-based way to do so. However, if a high-quality mesh is sampled with sufficient density, the obtained feature points can be used to encode the shape of the surrounding vertices in a way that is easily transferred to another mesh with corresponding feature points. In this paper we present a method for obtaining the information needed to reconstruct changes in the facial surface on the basis of selected feature points.
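
    The transfer idea described above can be sketched in code. This is a deliberately simplified illustration, not the paper's actual anthropometry-based encoding: each vertex is stored as an offset from the centroid of its k nearest feature points, and the offsets are re-applied on a target mesh with corresponding feature points. All names and data are hypothetical.

    ```python
    import numpy as np

    def encode_vertices(vertices, feature_points, k=3):
        """Encode each vertex as an offset from the centroid of its k nearest
        feature points (a simplified stand-in for the paper's encoding)."""
        encodings = []
        for v in vertices:
            d = np.linalg.norm(feature_points - v, axis=1)
            idx = np.argsort(d)[:k]                # k nearest feature points
            centroid = feature_points[idx].mean(axis=0)
            encodings.append((idx, v - centroid))  # indices + local offset
        return encodings

    def transfer_vertices(encodings, target_feature_points):
        """Reconstruct vertex positions on a target mesh with corresponding
        feature points, by re-applying the stored offsets."""
        out = []
        for idx, offset in encodings:
            centroid = target_feature_points[idx].mean(axis=0)
            out.append(centroid + offset)
        return np.array(out)

    # Toy example: identical feature points reproduce the original vertices.
    feats = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
    verts = np.array([[0.2, 0.2, 0.2], [0.5, 0.1, 0.0]])
    enc = encode_vertices(verts, feats)
    rec = transfer_vertices(enc, feats)
    ```

    On a real target mesh the feature points would differ from the source, so the reconstructed vertices follow the target's anthropometry while preserving the encoded local shape.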

  11. Smile to see the forest: Facially expressed positive emotions broaden cognition

    PubMed Central

    Johnson, Kareem J.; Waugh, Christian E.; Fredrickson, Barbara L.

    2011-01-01

    The broaden hypothesis, part of Fredrickson’s (1998, 2001) broaden-and-build theory, proposes that positive emotions lead to broadened cognitive states. Here, we present evidence that cognitive broadening can be produced by frequent facial expressions of positive emotion. Additionally, we present a novel method of using facial electromyography (EMG) to discriminate between Duchenne (genuine) and non-Duchenne (non-genuine) smiles. Across experiments, Duchenne smiles occurred more frequently during positive emotion inductions than neutral or negative inductions. Across experiments, Duchenne smiles correlated with self-reports of specific positive emotions. In Experiment 1, high frequencies of Duchenne smiles predicted increased attentional breadth on a global–local visual processing task. In Experiment 2, high frequencies of Duchenne smiles predicted increased attentional flexibility on a covert attentional orienting task. These data underscore the value of using multiple methods to measure emotional experience in studies of emotion and cognition. PMID:23275681
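
    The EMG-based discrimination described above rests on the fact that Duchenne smiles recruit the orbicularis oculi (eye-crinkling) muscle in addition to the zygomaticus major, whereas non-Duchenne smiles recruit only the latter. A toy threshold rule captures the logic; the threshold values and unit scaling here are purely illustrative, not taken from the study.

    ```python
    def classify_smile(zygomaticus_rms, orbicularis_rms,
                       zyg_thresh=1.0, orb_thresh=1.0):
        """Classify a smile sample from EMG amplitudes (RMS, in arbitrary
        baseline-normalised units; thresholds are illustrative)."""
        if zygomaticus_rms < zyg_thresh:
            return "no smile"
        # Duchenne smiles additionally recruit orbicularis oculi
        return "Duchenne" if orbicularis_rms >= orb_thresh else "non-Duchenne"
    ```

    In practice the discrimination would be made over baseline-corrected EMG epochs rather than single samples, but the two-muscle contrast is the core of the method.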

  12. Association between facial expression and PTSD symptoms among young children exposed to the Great East Japan Earthquake: a pilot study

    PubMed Central

    Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude

    2015-01-01

    “Emotional numbing” is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent’s Report of the Child’s Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes (‘baseline video’) followed by a 2-min video clip from a television comedy (‘comedy video’). Children’s facial expressions were processed using Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children’s reactions to disasters. PMID:26528206
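
    The covariate-adjusted linear regression described above can be sketched with ordinary least squares. Everything below is simulated for illustration (scores, covariate codings, and coefficients are hypothetical, not the study's data); the point is the design matrix with sex, age, and baseline expression as covariates alongside the PTSD score.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 23                                   # sample size matching the pilot study
    ptsd = rng.uniform(0, 30, n)             # hypothetical PTSD symptom scores
    sex = rng.integers(0, 2, n)              # covariates (illustrative codings)
    age = rng.uniform(3, 6, n)
    baseline = rng.uniform(0, 1, n)
    # Simulate the reported direction: higher PTSD -> more neutral expressions
    neutral_prop = 0.3 + 0.01 * ptsd + 0.05 * baseline + rng.normal(0, 0.02, n)

    # Design matrix: intercept, predictor of interest, then covariates
    X = np.column_stack([np.ones(n), ptsd, sex, age, baseline])
    beta, *_ = np.linalg.lstsq(X, neutral_prop, rcond=None)
    # beta[1] is the adjusted association between PTSD score and neutral affect
    ```

    A reported p-value would additionally require standard errors (e.g. via statsmodels' OLS), omitted here to keep the sketch minimal.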

  13. Effects of face feature and contour crowding in facial expression adaptation.

    PubMed

    Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong

    2014-12-01

    Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward sadness, a phenomenon known as face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face; however, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to induce crowding. Our first experiment asked whether internal feature crowding or external contour crowding is more effective at inducing a crowding effect. We found that combining internal feature and external contour crowding, but not either alone, induced a significant crowding effect. In Experiment 2, we investigated its effect on adaptation and found that both internal feature crowding and external contour crowding significantly reduced the facial expression aftereffect (FEA). However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we did find a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 found that the reduced adaptation aftereffect under combined crowding by the external face contour and the internal facial features cannot be decomposed linearly into the effects of the face contour and the facial features, suggesting a nonlinear integration between facial features and face contour in face adaptation.

  14. Automatic Change Detection to Facial Expressions in Adolescents: Evidence from Visual Mismatch Negativity Responses.

    PubMed

    Liu, Tongran; Xiao, Tong; Shi, Jiannong

    2016-01-01

    Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were recruited to complete an emotional oddball task featuring one happy and one fearful condition. Event-related potentials were recorded via electroencephalography and electrooculography to detect the visual mismatch negativity (vMMN) reflecting the automatic detection of changes in facial expressions in the two age groups. The findings demonstrated that the adolescent group showed more negative vMMN amplitudes than the adult group in the fronto-central region during the 120-200 ms interval. During the 370-450 ms window, only the adult group showed better automatic processing of fearful faces than of happy faces. The present study indicates that adolescents possess stronger automatic detection of changes in emotional expression than adults, and sheds light on the neurodevelopment of automatic processing of social-emotional information.
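
    The vMMN amplitude discussed above is conventionally computed as the deviant-minus-standard difference wave, averaged within a latency window such as the 120-200 ms interval reported here. A minimal single-channel sketch (the sampling rate and the toy waveforms are assumptions for illustration):

    ```python
    import numpy as np

    def mean_window_amplitude(erp, times, t_start, t_end):
        """Mean ERP amplitude within a latency window (times in seconds)."""
        mask = (times >= t_start) & (times < t_end)
        return erp[mask].mean()

    def vmmn(deviant_erp, standard_erp, times, t_start=0.120, t_end=0.200):
        """Visual mismatch negativity: deviant-minus-standard difference wave,
        averaged over the early window reported in the study (120-200 ms)."""
        diff = deviant_erp - standard_erp
        return mean_window_amplitude(diff, times, t_start, t_end)

    # Toy single-channel example at a 1000 Hz sampling rate
    times = np.arange(-0.1, 0.5, 0.001)
    standard = np.zeros_like(times)
    deviant = np.where((times >= 0.12) & (times < 0.20), -2.0, 0.0)  # 2 uV negativity
    ```

    A more negative return value corresponds to a stronger vMMN, which is the sense in which the adolescent group's amplitudes were "more negative" than the adults'.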

  16. Automatic Change Detection to Facial Expressions in Adolescents: Evidence from Visual Mismatch Negativity Responses

    PubMed Central

    Liu, Tongran; Xiao, Tong; Shi, Jiannong

    2016-01-01

    Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were recruited to complete an emotional oddball task featuring one happy and one fearful condition. Event-related potentials were recorded via electroencephalography and electrooculography to detect the visual mismatch negativity (vMMN) reflecting the automatic detection of changes in facial expressions in the two age groups. The findings demonstrated that the adolescent group showed more negative vMMN amplitudes than the adult group in the fronto-central region during the 120–200 ms interval. During the 370–450 ms window, only the adult group showed better automatic processing of fearful faces than of happy faces. The present study indicates that adolescents possess stronger automatic detection of changes in emotional expression than adults, and sheds light on the neurodevelopment of automatic processing of social-emotional information. PMID:27065927

  17. Impaired recognition of musical emotions and facial expressions following anteromedial temporal lobe excision.

    PubMed

    Gosselin, Nathalie; Peretz, Isabelle; Hasboun, Dominique; Baulac, Michel; Samson, Séverine

    2011-10-01

    In a prior study (Gosselin et al., 2005), we showed that anteromedial temporal lobe resection can impair the recognition of scary music. In other studies (Adolphs et al., 2001; Anderson et al., 2000), similar results have been obtained with fearful facial expressions. These findings suggest that scary music and fearful faces may be processed by common cerebral structures. To assess this possibility, we tested patients with unilateral anteromedial temporal excision and normal controls on two emotional tasks. In the musical emotion task, stimuli evoked fear, peacefulness, happiness or sadness, and participants rated on 10-point scales the extent to which each stimulus expressed each of these four emotions. The facial emotion task used morphed stimuli whose expressions varied from faint to pronounced and evoked fear, happiness, sadness, surprise, anger or disgust; participants were asked to select the appropriate label. Most patients were impaired in the recognition of both scary music and fearful faces. Furthermore, performance on the two tasks was correlated, suggesting a multimodal representation of fear within the amygdala. However, inspection of individual results showed that recognition of fearful faces can be preserved while recognition of scary music is impaired. Such a dissociation, found in two cases, suggests that fear recognition in faces and in music does not necessarily involve exactly the same cerebral networks; this hypothesis is discussed in light of the current literature.

  18. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    ERIC Educational Resources Information Center

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants'…

  19. Shades of Emotion: What the Addition of Sunglasses or Masks to Faces Reveals about the Development of Facial Expression Processing

    ERIC Educational Resources Information Center

    Roberson, Debi; Kikutani, Mariko; Doge, Paula; Whitaker, Lydia; Majid, Asifa

    2012-01-01

    Three studies investigated developmental changes in facial expression processing, between 3 years-of-age and adulthood. For adults and older children, the addition of sunglasses to upright faces caused an equivalent decrement in performance to face inversion. However, younger children showed "better" classification of expressions of faces wearing…

  20. The Relationship between the Recognition of Facial Expressions and Self-Reported Anger in People with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Woodcock, Kate A.; Rose, John

    2007-01-01

    Background: This study aims to examine the relationship between how individuals with intellectual disabilities report their own levels of anger, and the ability of those individuals to recognize emotions. It was hypothesized that increased expression of anger would be linked to lower ability to recognize facial emotional expressions and increased…

  1. The Effect of Gaze Direction on the Processing of Facial Expressions in Children with Autism Spectrum Disorder: An ERP Study

    ERIC Educational Resources Information Center

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2010-01-01

    This study investigated the neural basis of the effect of gaze direction on facial expression processing in children with and without ASD, using event-related potential (ERP). Children with ASD (10-17-year olds) and typically developing (TD) children (9-16-year olds) were asked to determine the emotional expressions (anger or fearful) of a facial…

  2. Distinct frontal and amygdala correlates of change detection for facial identity and expression

    PubMed Central

    Achaibou, Amal; Loth, Eva

    2016-01-01

    Recruitment of ‘top-down’ frontal attentional mechanisms is held to support detection of changes in task-relevant stimuli. Fluctuations in intrinsic frontal activity have been shown to impact task performance more generally. Meanwhile, the amygdala has been implicated in ‘bottom-up’ attentional capture by threat. Here, 22 adult human participants took part in a functional magnetic resonance change detection study aimed at investigating the correlates of successful (vs failed) detection of changes in facial identity vs expression. For identity changes, we expected prefrontal recruitment to differentiate ‘hit’ from ‘miss’ trials, in line with previous reports. Meanwhile, we postulated that a different mechanism would support detection of emotionally salient changes. Specifically, elevated amygdala activation was predicted to be associated with successful detection of threat-related changes in expression, over-riding the influence of fluctuations in top-down attention. Our findings revealed that fusiform activity tracked change detection across conditions. Ventrolateral prefrontal cortical activity was uniquely linked to detection of changes in identity not expression, and amygdala activity to detection of changes from neutral to fearful expressions. These results are consistent with distinct mechanisms supporting detection of changes in face identity vs expression, the former potentially reflecting top-down attention, the latter bottom-up attentional capture by stimulus emotional salience. PMID:26245835

  3. Conscious and unconscious processing of facial expressions: evidence from two split-brain patients.

    PubMed

    Prete, Giulia; D'Ascenzo, Stefania; Laeng, Bruno; Fabri, Mara; Foschi, Nicoletta; Tommasi, Luca

    2015-03-01

    We investigated how the brain's hemispheres process explicit and implicit facial expressions in two 'split-brain' patients (one with a complete and one with a partial anterior resection). Photographs of faces expressing positive, negative or neutral emotions were shown either centrally or bilaterally. The task consisted in judging the friendliness of each person in the photographs. Half of the photograph stimuli were 'hybrid faces', that is an amalgamation of filtered images which contained emotional information only in the low range of spatial frequency, blended to a neutral expression of the same individual in the rest of the spatial frequencies. The other half of the images contained unfiltered faces. With the hybrid faces the patients and a matched control group were more influenced in their social judgements by the emotional expression of the face shown in the left visual field (LVF). When the expressions were shown explicitly, that is without filtering, the control group and the partially callosotomized patient based their judgement on the face shown in the LVF, whereas the complete split-brain patient based his ratings mainly on the face presented in the right visual field. We conclude that the processing of implicit emotions does not require the integrity of callosal fibres and can take place within subcortical routes lateralized in the right hemisphere. PMID:24325712

  4. Does facial expressivity count? How typically developing children respond initially to children with autism.

    PubMed

    Stagg, Steven D; Slavny, Rachel; Hand, Charlotte; Cardoso, Alice; Smith, Pamela

    2014-08-01

    Research investigating expressivity in children with autism spectrum disorder has reported flat affect or bizarre facial expressivity within this population; however, the impact expressivity may have on first impression formation has received little research attention. We examined how videos of children with autism spectrum disorder were rated for expressivity by adults blind to the condition. We further investigated the friendship ratings given by 44 typically developing children to the same videos. These ratings were compared to friendship ratings given to video clips of typically developing children. Results demonstrated that adult raters, blind to the diagnosis of the children in the videos, rated children with autism spectrum disorder as being less expressive than typically developing children. The children with autism spectrum disorder were also rated lower than typically developing children on all aspects of our friendship measures by the 44 child raters. Results suggest that impression formation is less positive towards children with autism spectrum disorder than towards typically developing children even when exposure time is brief. PMID:24121180

  5. Distinct frontal and amygdala correlates of change detection for facial identity and expression.

    PubMed

    Achaibou, Amal; Loth, Eva; Bishop, Sonia J

    2016-02-01

    Recruitment of 'top-down' frontal attentional mechanisms is held to support detection of changes in task-relevant stimuli. Fluctuations in intrinsic frontal activity have been shown to impact task performance more generally. Meanwhile, the amygdala has been implicated in 'bottom-up' attentional capture by threat. Here, 22 adult human participants took part in a functional magnetic resonance change detection study aimed at investigating the correlates of successful (vs failed) detection of changes in facial identity vs expression. For identity changes, we expected prefrontal recruitment to differentiate 'hit' from 'miss' trials, in line with previous reports. Meanwhile, we postulated that a different mechanism would support detection of emotionally salient changes. Specifically, elevated amygdala activation was predicted to be associated with successful detection of threat-related changes in expression, over-riding the influence of fluctuations in top-down attention. Our findings revealed that fusiform activity tracked change detection across conditions. Ventrolateral prefrontal cortical activity was uniquely linked to detection of changes in identity not expression, and amygdala activity to detection of changes from neutral to fearful expressions. These results are consistent with distinct mechanisms supporting detection of changes in face identity vs expression, the former potentially reflecting top-down attention, the latter bottom-up attentional capture by stimulus emotional salience.

  6. Individual differences in neural activity during a facial expression vs. identity working memory task.

    PubMed

    Neta, Maital; Whalen, Paul J

    2011-06-01

    Facial expressions of emotion constitute a critical portion of our non-verbal social interactions. In addition, the identity of the individual displaying this expression is critical to these interactions as they embody the context in which these expressions will be interpreted. To identify any overlapping and/or unique brain circuitry involved in the processing of these two information streams in a laboratory setting, participants performed a working memory (WM) task (i.e., n-back) in which they were instructed to monitor either the expression (EMO) or the identity (ID) of the same set of face stimuli. Consistent with previous work, during both the EMO and ID tasks, we found a significant increase in activity in dorsolateral prefrontal cortex (DLPFC) supporting its generalized role in WM. Further, individuals that showed greater DLPFC activity during both tasks also showed increased amygdala activity during the EMO task and increased lateral fusiform gyrus activity during the ID task. Importantly, the level of activity in these regions significantly correlated with performance on the respective tasks. These findings provide support for two separate neural circuitries, both involving the DLPFC, supporting working memory for the faces and expressions of others. PMID:21349341
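
    The n-back paradigm used above has a simple formal structure: a trial is a target when its stimulus matches the one presented n trials earlier, with "match" defined by whichever dimension (expression or identity) the participant is monitoring. A minimal sketch of the task logic (the face labels are hypothetical):

    ```python
    def nback_targets(stream, n=2):
        """Indices of n-back targets: trials whose stimulus matches the one
        presented n trials earlier along the monitored dimension."""
        return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

    # Monitoring identity (ID task): the same identity 2 trials back is a target.
    # Monitoring expression (EMO task) would apply the same rule to expression
    # labels for the identical stimulus stream.
    faces = ["A", "B", "A", "C", "A", "A"]
    ```

    Because the ID and EMO tasks use the same stimuli and the same matching rule, differences in brain activity can be attributed to which information stream is being maintained in working memory rather than to the stimuli themselves.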

  7. Perception of emotions from facial expressions in high-functioning adults with autism

    PubMed Central

    Kennedy, Daniel P.; Adolphs, Ralph

    2012-01-01

    Impairment in social communication is one of the diagnostic hallmarks of autism spectrum disorders, and a large body of research has documented aspects of impaired social cognition in autism, both at the level of the processes and the neural structures involved. Yet one of the most common social communicative abilities in everyday life, the ability to judge somebody's emotion from their facial expression, has yielded conflicting findings. To investigate this issue, we used a sensitive task that has been used to assess facial emotion perception in a number of neurological and psychiatric populations. Fifteen high-functioning adults with autism and 19 control participants rated the emotional intensity of 36 faces displaying basic emotions. Every face was rated six times, once for each emotion category. The autism group gave ratings that were significantly less sensitive to a given emotion, and less reliable across repeated testing, resulting in overall decreased specificity in emotion perception. We thus demonstrate a subtle but specific pattern of impairments in facial emotion perception in people with autism. PMID:23022433

  8. Space-by-time manifold representation of dynamic facial expressions for emotion categorization

    PubMed Central

    Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.

    2016-01-01

    Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold r